Apr 30 03:29:01.171341 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:29:01.171382 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:29:01.171401 kernel: BIOS-provided physical RAM map:
Apr 30 03:29:01.171413 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:29:01.171424 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 30 03:29:01.171437 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 30 03:29:01.171453 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 30 03:29:01.171466 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 30 03:29:01.171479 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 30 03:29:01.171495 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 30 03:29:01.171508 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 30 03:29:01.171522 kernel: NX (Execute Disable) protection: active
Apr 30 03:29:01.171534 kernel: APIC: Static calls initialized
Apr 30 03:29:01.171548 kernel: efi: EFI v2.7 by EDK II
Apr 30 03:29:01.171565 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Apr 30 03:29:01.171581 kernel: SMBIOS 2.7 present.
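The firmware-provided e820 map above can be cross-checked against the "Memory: …K/…K available" line that appears later in this log. A minimal Python sketch, assuming the e820 lines have been extracted from the journal as plain text (the `usable_bytes` helper and the inlined sample are illustrative, not part of the log):

```python
import re

# Sample BIOS-e820 lines as they appear in the log above.
E820_LINES = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
"""

PATTERN = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

def usable_bytes(lines: str) -> int:
    """Sum the sizes of all ranges the firmware marked 'usable'."""
    total = 0
    for line in lines.splitlines():
        m = PATTERN.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # e820 ranges are inclusive
    return total

print(usable_bytes(E820_LINES) // 1024)  # → 2037808
```

The result (2037808 KiB) lines up with the kernel's later "Memory: 1874608K/2037804K available" report, minus the first 4 KiB page that the kernel reserves in the "e820: update [mem 0x00000000-0x00000fff]" entry.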
Apr 30 03:29:01.171593 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 30 03:29:01.171605 kernel: Hypervisor detected: KVM
Apr 30 03:29:01.171616 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:29:01.171630 kernel: kvm-clock: using sched offset of 3581116175 cycles
Apr 30 03:29:01.171643 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:29:01.171656 kernel: tsc: Detected 2499.996 MHz processor
Apr 30 03:29:01.171667 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:29:01.171678 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:29:01.171690 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 30 03:29:01.171706 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:29:01.171719 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:29:01.171732 kernel: Using GB pages for direct mapping
Apr 30 03:29:01.171745 kernel: Secure boot disabled
Apr 30 03:29:01.172730 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:29:01.172768 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 30 03:29:01.172783 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 30 03:29:01.172795 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 30 03:29:01.172806 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 30 03:29:01.172824 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 30 03:29:01.172835 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 30 03:29:01.172847 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 30 03:29:01.172858 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 30 03:29:01.172870 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 30 03:29:01.172883 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 30 03:29:01.172902 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 30 03:29:01.172920 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 30 03:29:01.172933 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 30 03:29:01.172948 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 30 03:29:01.172962 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 30 03:29:01.172976 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 30 03:29:01.172989 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 30 03:29:01.173006 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 30 03:29:01.173019 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 30 03:29:01.173033 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 30 03:29:01.173048 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 30 03:29:01.173062 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 30 03:29:01.173078 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Apr 30 03:29:01.173093 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 30 03:29:01.173107 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:29:01.173120 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:29:01.173135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 30 03:29:01.173155 kernel: NUMA: Initialized distance table, cnt=1
Apr 30 03:29:01.173170 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Apr 30 03:29:01.173186 kernel: Zone ranges:
Apr 30 03:29:01.173203 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:29:01.173220 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 30 03:29:01.173235 kernel: Normal empty
Apr 30 03:29:01.173248 kernel: Movable zone start for each node
Apr 30 03:29:01.173262 kernel: Early memory node ranges
Apr 30 03:29:01.173275 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:29:01.173292 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 30 03:29:01.173306 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 30 03:29:01.173319 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 30 03:29:01.173333 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:29:01.173347 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:29:01.173361 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 30 03:29:01.173375 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 30 03:29:01.173389 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 30 03:29:01.173403 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:29:01.173420 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 30 03:29:01.173433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:29:01.173447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:29:01.173461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:29:01.173474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:29:01.173488 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:29:01.173502 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:29:01.173516 kernel: TSC deadline timer available
Apr 30 03:29:01.173530 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:29:01.173544 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:29:01.173561 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 30 03:29:01.173574 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:29:01.173588 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:29:01.173602 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:29:01.173616 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:29:01.173630 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:29:01.173643 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:29:01.173657 kernel: kvm-guest: PV spinlocks enabled
Apr 30 03:29:01.173671 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:29:01.173692 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:29:01.173707 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:29:01.173720 kernel: random: crng init done
Apr 30 03:29:01.173734 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:29:01.173748 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:29:01.173829 kernel: Fallback order for Node 0: 0
Apr 30 03:29:01.173844 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 30 03:29:01.173862 kernel: Policy zone: DMA32
Apr 30 03:29:01.173876 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:29:01.173890 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 162936K reserved, 0K cma-reserved)
Apr 30 03:29:01.173905 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:29:01.173918 kernel: Kernel/User page tables isolation: enabled
Apr 30 03:29:01.173933 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:29:01.173947 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:29:01.173961 kernel: Dynamic Preempt: voluntary
Apr 30 03:29:01.173976 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:29:01.173994 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:29:01.174008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:29:01.174023 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:29:01.174037 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:29:01.174052 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:29:01.174067 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:29:01.174081 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:29:01.174099 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:29:01.174125 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
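The kernel command line echoed in the log is a flat string of `key=value` tokens and bare flags. A minimal sketch of splitting it into a lookup table, assuming the string has been pulled out of the log (or `/proc/cmdline`); note the caveat in the comments about repeated keys such as `console=`, which this naive dict silently collapses:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {param: value}; bare flags map to None.

    Simplification: repeated parameters (e.g. console= given twice) keep only
    the last occurrence, and quoted values are not handled.
    """
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

# Abridged copy of the command line from the log above.
CMDLINE = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
    "console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 "
    "modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295"
)

params = parse_cmdline(CMDLINE)
print(params["root"])     # → LABEL=ROOT
print(params["console"])  # → tty0  (the second console= entry wins)
```

`partition("=")` rather than `split("=")` keeps values that themselves contain `=` intact, which is why `root=LABEL=ROOT` parses to `LABEL=ROOT`.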
Apr 30 03:29:01.174140 kernel: Console: colour dummy device 80x25
Apr 30 03:29:01.174156 kernel: printk: console [tty0] enabled
Apr 30 03:29:01.174171 kernel: printk: console [ttyS0] enabled
Apr 30 03:29:01.174186 kernel: ACPI: Core revision 20230628
Apr 30 03:29:01.174205 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 30 03:29:01.174221 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:29:01.174237 kernel: x2apic enabled
Apr 30 03:29:01.174252 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:29:01.174269 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 30 03:29:01.174288 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Apr 30 03:29:01.174303 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:29:01.174316 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:29:01.174332 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:29:01.174345 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:29:01.174360 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:29:01.174376 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:29:01.174390 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 30 03:29:01.174405 kernel: RETBleed: Vulnerable
Apr 30 03:29:01.174424 kernel: Speculative Store Bypass: Vulnerable
Apr 30 03:29:01.174438 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:29:01.174453 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:29:01.174468 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 30 03:29:01.174483 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:29:01.174497 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:29:01.174513 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:29:01.174527 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 30 03:29:01.174542 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 30 03:29:01.174556 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 30 03:29:01.174571 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 30 03:29:01.174589 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 30 03:29:01.174603 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 30 03:29:01.174618 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:29:01.174631 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 30 03:29:01.174646 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 30 03:29:01.174660 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 30 03:29:01.174674 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 30 03:29:01.174688 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 30 03:29:01.174703 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 30 03:29:01.174717 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 30 03:29:01.174733 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:29:01.174749 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:29:01.174782 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:29:01.174795 kernel: landlock: Up and running.
Apr 30 03:29:01.174810 kernel: SELinux: Initializing.
Apr 30 03:29:01.174826 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:29:01.174841 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:29:01.174855 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 30 03:29:01.174937 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:29:01.174953 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:29:01.174968 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:29:01.174984 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 30 03:29:01.175003 kernel: signal: max sigframe size: 3632
Apr 30 03:29:01.175019 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:29:01.175034 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:29:01.175050 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:29:01.175067 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:29:01.175082 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:29:01.175095 kernel: .... node #0, CPUs: #1
Apr 30 03:29:01.175110 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 30 03:29:01.175128 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 03:29:01.175286 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:29:01.175302 kernel: smpboot: Max logical packages: 1
Apr 30 03:29:01.175324 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Apr 30 03:29:01.175338 kernel: devtmpfs: initialized
Apr 30 03:29:01.175351 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:29:01.175365 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 30 03:29:01.175379 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:29:01.175394 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:29:01.175414 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:29:01.175430 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:29:01.175449 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:29:01.175464 kernel: audit: type=2000 audit(1745983740.977:1): state=initialized audit_enabled=0 res=1
Apr 30 03:29:01.175481 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:29:01.175497 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:29:01.175515 kernel: cpuidle: using governor menu
Apr 30 03:29:01.175532 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:29:01.175550 kernel: dca service started, version 1.12.1
Apr 30 03:29:01.175572 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:29:01.175589 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:29:01.175603 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:29:01.175619 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:29:01.175636 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:29:01.175653 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:29:01.175668 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:29:01.175685 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:29:01.175703 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:29:01.175726 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:29:01.175744 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 30 03:29:01.179166 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:29:01.179194 kernel: ACPI: Interpreter enabled
Apr 30 03:29:01.179208 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:29:01.179223 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:29:01.179238 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:29:01.179252 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:29:01.179267 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 03:29:01.179282 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:29:01.179530 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:29:01.179795 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 03:29:01.179942 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 03:29:01.179964 kernel: acpiphp: Slot [3] registered
Apr 30 03:29:01.179981 kernel: acpiphp: Slot [4] registered
Apr 30 03:29:01.179995 kernel: acpiphp: Slot [5] registered
Apr 30 03:29:01.180012 kernel: acpiphp: Slot [6] registered
Apr 30 03:29:01.180035 kernel: acpiphp: Slot [7] registered
Apr 30 03:29:01.180051 kernel: acpiphp: Slot [8] registered
Apr 30 03:29:01.180066 kernel: acpiphp: Slot [9] registered
Apr 30 03:29:01.180084 kernel: acpiphp: Slot [10] registered
Apr 30 03:29:01.180100 kernel: acpiphp: Slot [11] registered
Apr 30 03:29:01.180117 kernel: acpiphp: Slot [12] registered
Apr 30 03:29:01.180133 kernel: acpiphp: Slot [13] registered
Apr 30 03:29:01.180150 kernel: acpiphp: Slot [14] registered
Apr 30 03:29:01.180166 kernel: acpiphp: Slot [15] registered
Apr 30 03:29:01.180186 kernel: acpiphp: Slot [16] registered
Apr 30 03:29:01.180202 kernel: acpiphp: Slot [17] registered
Apr 30 03:29:01.180218 kernel: acpiphp: Slot [18] registered
Apr 30 03:29:01.180234 kernel: acpiphp: Slot [19] registered
Apr 30 03:29:01.180250 kernel: acpiphp: Slot [20] registered
Apr 30 03:29:01.180267 kernel: acpiphp: Slot [21] registered
Apr 30 03:29:01.180283 kernel: acpiphp: Slot [22] registered
Apr 30 03:29:01.180299 kernel: acpiphp: Slot [23] registered
Apr 30 03:29:01.180315 kernel: acpiphp: Slot [24] registered
Apr 30 03:29:01.180329 kernel: acpiphp: Slot [25] registered
Apr 30 03:29:01.180348 kernel: acpiphp: Slot [26] registered
Apr 30 03:29:01.180365 kernel: acpiphp: Slot [27] registered
Apr 30 03:29:01.180382 kernel: acpiphp: Slot [28] registered
Apr 30 03:29:01.180398 kernel: acpiphp: Slot [29] registered
Apr 30 03:29:01.180415 kernel: acpiphp: Slot [30] registered
Apr 30 03:29:01.180431 kernel: acpiphp: Slot [31] registered
Apr 30 03:29:01.180446 kernel: PCI host bridge to bus 0000:00
Apr 30 03:29:01.180588 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:29:01.180720 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:29:01.181331 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:29:01.181465 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 03:29:01.181582 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 30 03:29:01.181703 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:29:01.183074 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 03:29:01.183332 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 03:29:01.183514 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 30 03:29:01.183670 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 30 03:29:01.185006 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 30 03:29:01.185171 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 30 03:29:01.185368 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 30 03:29:01.185510 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 30 03:29:01.185651 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 30 03:29:01.187770 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 30 03:29:01.187960 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 30 03:29:01.188107 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 30 03:29:01.188243 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 30 03:29:01.188389 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 30 03:29:01.188547 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:29:01.188719 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 30 03:29:01.188890 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 30 03:29:01.189051 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 30 03:29:01.189193 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 30 03:29:01.189212 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:29:01.189227 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:29:01.189242 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:29:01.189256 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:29:01.189276 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 03:29:01.189290 kernel: iommu: Default domain type: Translated
Apr 30 03:29:01.189304 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:29:01.189319 kernel: efivars: Registered efivars operations
Apr 30 03:29:01.189333 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:29:01.189348 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:29:01.189363 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 30 03:29:01.189377 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 30 03:29:01.189510 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 30 03:29:01.189645 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 30 03:29:01.189793 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:29:01.189812 kernel: vgaarb: loaded
Apr 30 03:29:01.189828 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 30 03:29:01.189845 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 30 03:29:01.189862 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:29:01.189879 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:29:01.189896 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:29:01.189917 kernel: pnp: PnP ACPI init
Apr 30 03:29:01.189933 kernel: pnp: PnP ACPI: found 5 devices
Apr 30 03:29:01.189951 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:29:01.189968 kernel: NET: Registered PF_INET protocol family
Apr 30 03:29:01.189985 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:29:01.190002 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:29:01.190019 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:29:01.190037 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:29:01.190054 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:29:01.190074 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:29:01.190091 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:29:01.190108 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:29:01.190124 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:29:01.190141 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:29:01.190277 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:29:01.190506 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:29:01.190638 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:29:01.190838 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 03:29:01.190964 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 30 03:29:01.191109 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 03:29:01.191131 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:29:01.191148 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:29:01.191164 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 30 03:29:01.191181 kernel: clocksource: Switched to clocksource tsc
Apr 30 03:29:01.191197 kernel: Initialise system trusted keyrings
Apr 30 03:29:01.191213 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:29:01.191235 kernel: Key type asymmetric registered
Apr 30 03:29:01.191251 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:29:01.191267 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:29:01.191283 kernel: io scheduler mq-deadline registered
Apr 30 03:29:01.191299 kernel: io scheduler kyber registered
Apr 30 03:29:01.191315 kernel: io scheduler bfq registered
Apr 30 03:29:01.191343 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:29:01.191359 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:29:01.191375 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:29:01.191395 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:29:01.191412 kernel: i8042: Warning: Keylock active
Apr 30 03:29:01.191427 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:29:01.191443 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:29:01.191659 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 30 03:29:01.191829 kernel: rtc_cmos 00:00: registered as rtc0
Apr 30 03:29:01.191959 kernel: rtc_cmos 00:00: setting system clock to 2025-04-30T03:29:00 UTC (1745983740)
Apr 30 03:29:01.192090 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 30 03:29:01.192110 kernel: intel_pstate: CPU model not supported
Apr 30 03:29:01.192127 kernel: efifb: probing for efifb
Apr 30 03:29:01.192143 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 30 03:29:01.192159 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 30 03:29:01.192461 kernel: efifb: scrolling: redraw
Apr 30 03:29:01.192481 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 03:29:01.192498 kernel: Console: switching to colour frame buffer device 100x37
Apr 30 03:29:01.192515 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:29:01.192531 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:29:01.192553 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:29:01.192569 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:29:01.192585 kernel: Segment Routing with IPv6
Apr 30 03:29:01.192602 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:29:01.192618 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:29:01.192635 kernel: Key type dns_resolver registered
Apr 30 03:29:01.192674 kernel: IPI shorthand broadcast: enabled
Apr 30 03:29:01.192694 kernel: sched_clock: Marking stable (487001941, 135950926)->(690193901, -67241034)
Apr 30 03:29:01.192711 kernel: registered taskstats version 1
Apr 30 03:29:01.192732 kernel: Loading compiled-in X.509 certificates
Apr 30 03:29:01.192749 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:29:01.192799 kernel: Key type .fscrypt registered
Apr 30 03:29:01.192816 kernel: Key type fscrypt-provisioning registered
Apr 30 03:29:01.192833 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:29:01.192850 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:29:01.192867 kernel: ima: No architecture policies found
Apr 30 03:29:01.192884 kernel: clk: Disabling unused clocks
Apr 30 03:29:01.192904 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:29:01.192921 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:29:01.192938 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:29:01.192992 kernel: Run /init as init process
Apr 30 03:29:01.193009 kernel: with arguments:
Apr 30 03:29:01.193026 kernel: /init
Apr 30 03:29:01.193043 kernel: with environment:
Apr 30 03:29:01.193059 kernel: HOME=/
Apr 30 03:29:01.193076 kernel: TERM=linux
Apr 30 03:29:01.193093 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:29:01.193118 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
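The systemd startup banner lists compile-time features with a `+`/`-` prefix per flag. A small sketch of splitting that banner into enabled and disabled sets, useful when checking many such logs for a particular build option (the helper name and the abridged banner string are illustrative):

```python
def parse_features(banner: str) -> tuple[set, set]:
    """Split systemd's compile-time feature list into enabled (+) and disabled (-) sets."""
    enabled, disabled = set(), set()
    for token in banner.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

# Abridged from the systemd 255 banner in the log above.
FEATURES = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GNUTLS +OPENSSL -ACL +TPM2 -SYSVINIT"
enabled, disabled = parse_features(FEATURES)
print("SELINUX" in enabled, "APPARMOR" in disabled)  # → True True
```

Tokens without a `+` or `-` prefix (such as `default-hierarchy=unified`) are ignored by this sketch, which is deliberate: they are key=value settings, not feature flags.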
Apr 30 03:29:01.193139 systemd[1]: Detected virtualization amazon.
Apr 30 03:29:01.193157 systemd[1]: Detected architecture x86-64.
Apr 30 03:29:01.193173 systemd[1]: Running in initrd.
Apr 30 03:29:01.193191 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:29:01.193207 systemd[1]: Hostname set to .
Apr 30 03:29:01.193229 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:29:01.193246 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:29:01.193263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:29:01.193280 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:29:01.193299 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:29:01.193317 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:29:01.193335 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:29:01.193357 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:29:01.193377 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:29:01.193395 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:29:01.193413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:29:01.193490 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:29:01.193514 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:29:01.193531 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:29:01.193549 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:29:01.193567 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:29:01.193585 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:29:01.193602 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:29:01.193621 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:29:01.193638 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:29:01.193656 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:29:01.193678 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:29:01.193696 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:29:01.193713 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:29:01.193731 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:29:01.193749 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:29:01.193787 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:29:01.193805 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:29:01.193823 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:29:01.193845 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:29:01.193863 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:29:01.193881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:29:01.193938 systemd-journald[178]: Collecting audit messages is disabled.
Apr 30 03:29:01.193979 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:29:01.193997 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:29:01.194015 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:29:01.194034 systemd-journald[178]: Journal started
Apr 30 03:29:01.194073 systemd-journald[178]: Runtime Journal (/run/log/journal/ec264577a1902490fd6984c6f5b62f35) is 4.7M, max 38.2M, 33.4M free.
Apr 30 03:29:01.201795 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:29:01.201493 systemd-modules-load[179]: Inserted module 'overlay'
Apr 30 03:29:01.214185 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:29:01.230664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:29:01.234120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:29:01.244205 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:29:01.252786 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:29:01.250894 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:29:01.259778 kernel: Bridge firewalling registered
Apr 30 03:29:01.266861 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 30 03:29:01.280724 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:29:01.281859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:29:01.285788 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:29:01.296047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:29:01.309773 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:29:01.316303 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:29:01.317334 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:29:01.322990 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:29:01.327975 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:29:01.341827 dracut-cmdline[212]: dracut-dracut-053
Apr 30 03:29:01.346060 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:29:01.380710 systemd-resolved[213]: Positive Trust Anchors:
Apr 30 03:29:01.380733 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:29:01.380807 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:29:01.391044 systemd-resolved[213]: Defaulting to hostname 'linux'.
Apr 30 03:29:01.392588 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:29:01.394258 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:29:01.447797 kernel: SCSI subsystem initialized
Apr 30 03:29:01.458780 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:29:01.470787 kernel: iscsi: registered transport (tcp)
Apr 30 03:29:01.492971 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:29:01.493052 kernel: QLogic iSCSI HBA Driver
Apr 30 03:29:01.533798 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:29:01.540938 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:29:01.567242 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:29:01.567381 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:29:01.567407 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:29:01.610800 kernel: raid6: avx512x4 gen() 17808 MB/s
Apr 30 03:29:01.628783 kernel: raid6: avx512x2 gen() 17731 MB/s
Apr 30 03:29:01.646783 kernel: raid6: avx512x1 gen() 17730 MB/s
Apr 30 03:29:01.664779 kernel: raid6: avx2x4 gen() 17645 MB/s
Apr 30 03:29:01.681784 kernel: raid6: avx2x2 gen() 17761 MB/s
Apr 30 03:29:01.699001 kernel: raid6: avx2x1 gen() 13655 MB/s
Apr 30 03:29:01.699057 kernel: raid6: using algorithm avx512x4 gen() 17808 MB/s
Apr 30 03:29:01.718826 kernel: raid6: .... xor() 7699 MB/s, rmw enabled
Apr 30 03:29:01.718895 kernel: raid6: using avx512x2 recovery algorithm
Apr 30 03:29:01.740789 kernel: xor: automatically using best checksumming function avx
Apr 30 03:29:01.902789 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:29:01.915626 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:29:01.920985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:29:01.936394 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Apr 30 03:29:01.941480 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:29:01.951949 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:29:01.966455 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Apr 30 03:29:01.996394 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:29:02.001999 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:29:02.052342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:29:02.063048 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:29:02.086872 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:29:02.089567 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:29:02.091523 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:29:02.092805 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:29:02.098992 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:29:02.128983 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:29:02.145723 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 30 03:29:02.175028 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 30 03:29:02.175223 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 30 03:29:02.175405 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:b3:45:bd:b7:a1
Apr 30 03:29:02.175571 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:29:02.177638 (udev-worker)[441]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:29:02.186639 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:29:02.186824 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:29:02.190357 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:29:02.190393 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:29:02.194024 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:29:02.196473 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:29:02.196708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:29:02.199528 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:29:02.211072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:29:02.219779 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 30 03:29:02.223782 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 30 03:29:02.230825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:29:02.231880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:29:02.243918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:29:02.245429 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 30 03:29:02.257801 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:29:02.257884 kernel: GPT:9289727 != 16777215
Apr 30 03:29:02.257903 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:29:02.257921 kernel: GPT:9289727 != 16777215
Apr 30 03:29:02.258990 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:29:02.259039 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:29:02.276603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:29:02.280952 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:29:02.299737 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:29:02.332992 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (452)
Apr 30 03:29:02.351740 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 30 03:29:02.355799 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (442)
Apr 30 03:29:02.384432 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 30 03:29:02.384936 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 30 03:29:02.392364 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 30 03:29:02.399989 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:29:02.408530 disk-uuid[627]: Primary Header is updated.
Apr 30 03:29:02.408530 disk-uuid[627]: Secondary Entries is updated.
Apr 30 03:29:02.408530 disk-uuid[627]: Secondary Header is updated.
Apr 30 03:29:02.414864 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:29:02.419294 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:29:02.425793 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:29:02.757878 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 03:29:03.432975 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 03:29:03.433037 disk-uuid[628]: The operation has completed successfully.
Apr 30 03:29:03.541009 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:29:03.541105 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:29:03.568051 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:29:03.572286 sh[970]: Success
Apr 30 03:29:03.594780 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 03:29:03.698383 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:29:03.705016 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:29:03.707739 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:29:03.744673 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:29:03.744743 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:29:03.744785 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:29:03.746791 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:29:03.748023 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:29:03.777789 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 03:29:03.780980 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:29:03.782047 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:29:03.788960 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:29:03.790883 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:29:03.814839 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:29:03.814911 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:29:03.817746 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 03:29:03.825052 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 03:29:03.836145 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:29:03.838437 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:29:03.845625 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:29:03.854079 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:29:03.901854 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:29:03.913036 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:29:03.943277 systemd-networkd[1162]: lo: Link UP
Apr 30 03:29:03.943289 systemd-networkd[1162]: lo: Gained carrier
Apr 30 03:29:03.945281 systemd-networkd[1162]: Enumeration completed
Apr 30 03:29:03.945730 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:29:03.945735 systemd-networkd[1162]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:29:03.945899 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:29:03.948285 systemd[1]: Reached target network.target - Network.
Apr 30 03:29:03.949072 systemd-networkd[1162]: eth0: Link UP
Apr 30 03:29:03.949078 systemd-networkd[1162]: eth0: Gained carrier
Apr 30 03:29:03.949090 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:29:03.960858 systemd-networkd[1162]: eth0: DHCPv4 address 172.31.17.153/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 03:29:04.035585 ignition[1095]: Ignition 2.19.0
Apr 30 03:29:04.035599 ignition[1095]: Stage: fetch-offline
Apr 30 03:29:04.035904 ignition[1095]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:04.035918 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:29:04.036269 ignition[1095]: Ignition finished successfully
Apr 30 03:29:04.038874 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:29:04.045952 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:29:04.061047 ignition[1173]: Ignition 2.19.0
Apr 30 03:29:04.061067 ignition[1173]: Stage: fetch
Apr 30 03:29:04.061549 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:04.061563 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:29:04.061682 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:29:04.092320 ignition[1173]: PUT result: OK
Apr 30 03:29:04.094409 ignition[1173]: parsed url from cmdline: ""
Apr 30 03:29:04.094421 ignition[1173]: no config URL provided
Apr 30 03:29:04.094431 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:29:04.094448 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:29:04.094473 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:29:04.095100 ignition[1173]: PUT result: OK
Apr 30 03:29:04.095147 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 30 03:29:04.096167 ignition[1173]: GET result: OK
Apr 30 03:29:04.096228 ignition[1173]: parsing config with SHA512: 3030d9dc0715c2b7213dd877e19b425ae2c7f999cbb1ade710815f8bdf13ca03a4a0a02ea8ccb2a0031d01f0e06f7a5b6e6bfc7743bbab77b66b6a648e6b4da9
Apr 30 03:29:04.100365 unknown[1173]: fetched base config from "system"
Apr 30 03:29:04.100381 unknown[1173]: fetched base config from "system"
Apr 30 03:29:04.100884 ignition[1173]: fetch: fetch complete
Apr 30 03:29:04.100389 unknown[1173]: fetched user config from "aws"
Apr 30 03:29:04.100892 ignition[1173]: fetch: fetch passed
Apr 30 03:29:04.100951 ignition[1173]: Ignition finished successfully
Apr 30 03:29:04.103128 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:29:04.107043 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:29:04.125123 ignition[1179]: Ignition 2.19.0
Apr 30 03:29:04.125137 ignition[1179]: Stage: kargs
Apr 30 03:29:04.125620 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:04.125634 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:29:04.125781 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:29:04.126608 ignition[1179]: PUT result: OK
Apr 30 03:29:04.129155 ignition[1179]: kargs: kargs passed
Apr 30 03:29:04.129227 ignition[1179]: Ignition finished successfully
Apr 30 03:29:04.131087 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:29:04.135996 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:29:04.151742 ignition[1186]: Ignition 2.19.0
Apr 30 03:29:04.151774 ignition[1186]: Stage: disks
Apr 30 03:29:04.152245 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:04.152258 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:29:04.152380 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:29:04.153354 ignition[1186]: PUT result: OK
Apr 30 03:29:04.156658 ignition[1186]: disks: disks passed
Apr 30 03:29:04.156719 ignition[1186]: Ignition finished successfully
Apr 30 03:29:04.158241 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:29:04.158771 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:29:04.159084 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:29:04.159682 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:29:04.160219 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:29:04.160871 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:29:04.165985 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:29:04.204430 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 03:29:04.208226 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:29:04.212937 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:29:04.310774 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:29:04.311064 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:29:04.312180 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:29:04.324884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:29:04.327856 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:29:04.329040 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 03:29:04.329093 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:29:04.329117 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:29:04.334856 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:29:04.339957 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:29:04.346792 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1213)
Apr 30 03:29:04.351485 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:29:04.351551 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:29:04.351565 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 03:29:04.365152 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 03:29:04.366228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:29:04.414410 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:29:04.419729 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:29:04.424912 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:29:04.430140 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:29:04.652117 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:29:04.657045 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:29:04.661983 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:29:04.670790 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:29:04.692652 ignition[1330]: INFO : Ignition 2.19.0
Apr 30 03:29:04.693520 ignition[1330]: INFO : Stage: mount
Apr 30 03:29:04.694487 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:04.694487 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:29:04.694487 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:29:04.696541 ignition[1330]: INFO : PUT result: OK
Apr 30 03:29:04.698549 ignition[1330]: INFO : mount: mount passed
Apr 30 03:29:04.698549 ignition[1330]: INFO : Ignition finished successfully
Apr 30 03:29:04.700741 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:29:04.707935 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:29:04.708539 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:29:04.740929 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:29:04.746027 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:29:04.765804 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1343)
Apr 30 03:29:04.771559 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:29:04.771642 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:29:04.771656 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 03:29:04.777788 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 03:29:04.779741 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:29:04.802866 ignition[1360]: INFO : Ignition 2.19.0
Apr 30 03:29:04.802866 ignition[1360]: INFO : Stage: files
Apr 30 03:29:04.804656 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:04.804656 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:29:04.804656 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:29:04.804656 ignition[1360]: INFO : PUT result: OK
Apr 30 03:29:04.807174 ignition[1360]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:29:04.808433 ignition[1360]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:29:04.808433 ignition[1360]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:29:04.814173 ignition[1360]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:29:04.815009 ignition[1360]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:29:04.815735 ignition[1360]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:29:04.815541 unknown[1360]: wrote ssh authorized keys file for user: core
Apr 30 03:29:04.817898 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 03:29:04.818586 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 03:29:04.818586 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:29:04.818586 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:29:04.818586 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:29:04.818586 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:29:04.818586 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:29:04.818586 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:29:04.818586 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:29:04.818586 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 03:29:05.139138 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Apr 30 03:29:05.573501 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:29:05.573501 ignition[1360]: INFO : files: op(8): [started] processing unit "containerd.service"
Apr 30 03:29:05.575272 ignition[1360]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 03:29:05.575272 ignition[1360]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 03:29:05.575272 ignition[1360]: INFO : files: op(8): [finished] processing unit "containerd.service"
Apr 30 03:29:05.575272 ignition[1360]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:29:05.575272 ignition[1360]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:29:05.575272 ignition[1360]: INFO : files: files passed
Apr 30 03:29:05.579706 ignition[1360]: INFO : Ignition finished successfully
Apr 30 03:29:05.576619 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:29:05.584060 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:29:05.585952 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:29:05.588269 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:29:05.588787 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:29:05.600927 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:29:05.600927 initrd-setup-root-after-ignition[1388]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:29:05.603194 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:29:05.605093 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:29:05.605704 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:29:05.609944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:29:05.644646 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:29:05.644809 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:29:05.646096 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:29:05.647220 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:29:05.648224 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:29:05.659024 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:29:05.664910 systemd-networkd[1162]: eth0: Gained IPv6LL
Apr 30 03:29:05.673302 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:29:05.679991 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:29:05.691740 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:29:05.692953 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:29:05.693621 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:29:05.694496 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:29:05.694678 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:29:05.695961 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:29:05.696810 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:29:05.697580 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:29:05.698333 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:29:05.699084 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:29:05.699937 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:29:05.700680 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:29:05.701450 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:29:05.702547 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:29:05.703288 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:29:05.704092 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:29:05.704272 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:29:05.705337 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:29:05.706127 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:29:05.706792 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:29:05.706943 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:29:05.707640 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:29:05.707832 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:29:05.709193 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:29:05.709378 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:29:05.710074 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:29:05.710224 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:29:05.718052 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:29:05.723114 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:29:05.724431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:29:05.725509 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:29:05.726296 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:29:05.726459 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:29:05.738066 ignition[1412]: INFO : Ignition 2.19.0
Apr 30 03:29:05.741789 ignition[1412]: INFO : Stage: umount
Apr 30 03:29:05.741789 ignition[1412]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:05.741789 ignition[1412]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:29:05.741789 ignition[1412]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:29:05.739049 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:29:05.748171 ignition[1412]: INFO : PUT result: OK
Apr 30 03:29:05.739203 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:29:05.749707 ignition[1412]: INFO : umount: umount passed
Apr 30 03:29:05.750554 ignition[1412]: INFO : Ignition finished successfully
Apr 30 03:29:05.752925 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:29:05.753088 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:29:05.754912 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:29:05.754976 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:29:05.755522 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:29:05.755579 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:29:05.756242 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:29:05.756388 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:29:05.759241 systemd[1]: Stopped target network.target - Network.
Apr 30 03:29:05.759685 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:29:05.759750 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:29:05.760258 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:29:05.760656 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:29:05.764831 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:29:05.765182 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:29:05.766115 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:29:05.766735 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:29:05.766799 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:29:05.767282 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:29:05.767317 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:29:05.768113 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:29:05.768166 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:29:05.768679 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:29:05.768722 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:29:05.770038 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:29:05.770873 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:29:05.772988 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:29:05.774818 systemd-networkd[1162]: eth0: DHCPv6 lease lost
Apr 30 03:29:05.776667 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:29:05.776797 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:29:05.777720 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:29:05.777769 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:29:05.785489 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:29:05.785957 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:29:05.786024 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:29:05.786539 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:29:05.787196 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:29:05.787300 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:29:05.797337 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:29:05.797487 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:29:05.799876 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:29:05.799926 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:29:05.800647 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:29:05.800683 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:29:05.801917 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:29:05.801973 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:29:05.803104 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:29:05.803160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:29:05.804564 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:29:05.804612 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:29:05.810977 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:29:05.811425 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:29:05.811484 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:29:05.811943 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:29:05.811982 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:29:05.812365 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:29:05.812401 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:29:05.812747 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 03:29:05.814832 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:29:05.815978 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:29:05.816019 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:29:05.816697 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:29:05.816737 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:29:05.817686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:29:05.817724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:29:05.818438 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:29:05.818523 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:29:05.819264 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:29:05.819355 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:29:05.895882 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:29:05.896025 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:29:05.897243 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:29:05.897860 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:29:05.897938 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:29:05.906017 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:29:05.918593 systemd[1]: Switching root.
Apr 30 03:29:05.945281 systemd-journald[178]: Journal stopped
Apr 30 03:29:07.178729 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:29:07.181125 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:29:07.181154 kernel: SELinux: policy capability open_perms=1
Apr 30 03:29:07.181186 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:29:07.181205 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:29:07.181224 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:29:07.181250 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:29:07.181269 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:29:07.181289 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:29:07.181307 kernel: audit: type=1403 audit(1745983746.220:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:29:07.181337 systemd[1]: Successfully loaded SELinux policy in 41.567ms.
Apr 30 03:29:07.181377 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.218ms.
Apr 30 03:29:07.181402 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:29:07.181427 systemd[1]: Detected virtualization amazon.
Apr 30 03:29:07.181450 systemd[1]: Detected architecture x86-64.
Apr 30 03:29:07.181474 systemd[1]: Detected first boot.
Apr 30 03:29:07.181498 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:29:07.181521 zram_generator::config[1474]: No configuration found.
Apr 30 03:29:07.181552 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:29:07.181576 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:29:07.181610 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 30 03:29:07.181634 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:29:07.181658 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:29:07.181681 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:29:07.181704 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:29:07.181724 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:29:07.181746 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:29:07.181780 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:29:07.181806 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:29:07.181829 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:29:07.181851 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:29:07.181873 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:29:07.181894 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:29:07.181916 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:29:07.181938 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:29:07.181960 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:29:07.181984 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:29:07.182008 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:29:07.182030 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:29:07.182051 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:29:07.182073 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:29:07.182094 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:29:07.182116 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:29:07.182138 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:29:07.182159 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:29:07.182182 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:29:07.182202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:29:07.182223 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:29:07.182244 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:29:07.182266 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:29:07.182289 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:29:07.182310 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:29:07.182331 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:29:07.182353 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:29:07.182377 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:29:07.182399 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:29:07.182420 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:29:07.182443 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:29:07.182465 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:29:07.182486 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:29:07.182508 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:29:07.182529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:29:07.182551 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:29:07.182575 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:29:07.182596 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:29:07.182617 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:29:07.182640 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:29:07.182662 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 30 03:29:07.182683 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 30 03:29:07.182704 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:29:07.182725 kernel: loop: module loaded
Apr 30 03:29:07.182749 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:29:07.183371 kernel: fuse: init (API version 7.39)
Apr 30 03:29:07.183397 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:29:07.183420 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:29:07.183443 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:29:07.183467 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:29:07.183488 kernel: ACPI: bus type drm_connector registered
Apr 30 03:29:07.183510 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:29:07.183574 systemd-journald[1585]: Collecting audit messages is disabled.
Apr 30 03:29:07.183618 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:29:07.183639 systemd-journald[1585]: Journal started
Apr 30 03:29:07.183679 systemd-journald[1585]: Runtime Journal (/run/log/journal/ec264577a1902490fd6984c6f5b62f35) is 4.7M, max 38.2M, 33.4M free.
Apr 30 03:29:07.187660 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:29:07.187963 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:29:07.188633 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:29:07.189391 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:29:07.190130 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:29:07.191151 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:29:07.192199 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:29:07.193163 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:29:07.193393 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:29:07.194436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:29:07.194683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:29:07.195698 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:29:07.195957 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:29:07.197197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:29:07.197440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:29:07.199585 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:29:07.200214 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:29:07.201720 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:29:07.201952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:29:07.203161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:29:07.204295 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:29:07.205399 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:29:07.223573 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:29:07.230967 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:29:07.238895 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:29:07.240965 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:29:07.252068 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:29:07.272255 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:29:07.274681 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:29:07.277130 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:29:07.278872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:29:07.285041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:29:07.294255 systemd-journald[1585]: Time spent on flushing to /var/log/journal/ec264577a1902490fd6984c6f5b62f35 is 83.375ms for 955 entries.
Apr 30 03:29:07.294255 systemd-journald[1585]: System Journal (/var/log/journal/ec264577a1902490fd6984c6f5b62f35) is 8.0M, max 195.6M, 187.6M free.
Apr 30 03:29:07.388438 systemd-journald[1585]: Received client request to flush runtime journal.
Apr 30 03:29:07.301963 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:29:07.309773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:29:07.315031 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:29:07.317022 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:29:07.331973 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:29:07.340385 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:29:07.341222 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:29:07.373838 udevadm[1628]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 03:29:07.379300 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:29:07.390646 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:29:07.402182 systemd-tmpfiles[1623]: ACLs are not supported, ignoring.
Apr 30 03:29:07.402209 systemd-tmpfiles[1623]: ACLs are not supported, ignoring.
Apr 30 03:29:07.410461 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:29:07.421079 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:29:07.469455 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:29:07.481104 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:29:07.506490 systemd-tmpfiles[1645]: ACLs are not supported, ignoring.
Apr 30 03:29:07.506967 systemd-tmpfiles[1645]: ACLs are not supported, ignoring.
Apr 30 03:29:07.514351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:29:08.054577 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:29:08.068049 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:29:08.092235 systemd-udevd[1651]: Using default interface naming scheme 'v255'.
Apr 30 03:29:08.134566 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:29:08.144927 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:29:08.162935 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 03:29:08.204670 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 30 03:29:08.205603 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 03:29:08.224326 (udev-worker)[1666]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:29:08.245780 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 30 03:29:08.250781 kernel: ACPI: button: Power Button [PWRF]
Apr 30 03:29:08.252783 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Apr 30 03:29:08.255126 kernel: ACPI: button: Sleep Button [SLPF]
Apr 30 03:29:08.262776 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 30 03:29:08.297911 systemd-networkd[1654]: lo: Link UP
Apr 30 03:29:08.298224 systemd-networkd[1654]: lo: Gained carrier
Apr 30 03:29:08.300248 systemd-networkd[1654]: Enumeration completed
Apr 30 03:29:08.300484 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:29:08.302689 systemd-networkd[1654]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:29:08.302697 systemd-networkd[1654]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:29:08.309976 systemd-networkd[1654]: eth0: Link UP
Apr 30 03:29:08.311046 systemd-networkd[1654]: eth0: Gained carrier
Apr 30 03:29:08.311920 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 03:29:08.312452 systemd-networkd[1654]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:29:08.321011 systemd-networkd[1654]: eth0: DHCPv4 address 172.31.17.153/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 03:29:08.325811 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Apr 30 03:29:08.343777 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1661)
Apr 30 03:29:08.368520 systemd-networkd[1654]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:29:08.388071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:29:08.396420 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 03:29:08.398175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:29:08.398402 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:29:08.408938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:29:08.476939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 03:29:08.477938 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 03:29:08.483020 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 03:29:08.502974 lvm[1773]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:29:08.518901 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:29:08.523569 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 03:29:08.524615 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:29:08.529009 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 03:29:08.536413 lvm[1780]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:29:08.565210 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 03:29:08.566779 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:29:08.567518 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:29:08.567555 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:29:08.568161 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:29:08.570195 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:29:08.577939 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:29:08.581951 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:29:08.582939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:29:08.588930 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 03:29:08.592366 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:29:08.597601 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:29:08.604538 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:29:08.620843 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 03:29:08.628782 kernel: loop0: detected capacity change from 0 to 210664
Apr 30 03:29:08.642627 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:29:08.644814 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:29:08.725777 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:29:08.750821 kernel: loop1: detected capacity change from 0 to 140768 Apr 30 03:29:08.824778 kernel: loop2: detected capacity change from 0 to 142488 Apr 30 03:29:08.877789 kernel: loop3: detected capacity change from 0 to 61336 Apr 30 03:29:08.922788 kernel: loop4: detected capacity change from 0 to 210664 Apr 30 03:29:08.957714 kernel: loop5: detected capacity change from 0 to 140768 Apr 30 03:29:08.991838 kernel: loop6: detected capacity change from 0 to 142488 Apr 30 03:29:09.021780 kernel: loop7: detected capacity change from 0 to 61336 Apr 30 03:29:09.051599 (sd-merge)[1801]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 30 03:29:09.052464 (sd-merge)[1801]: Merged extensions into '/usr'. Apr 30 03:29:09.060746 systemd[1]: Reloading requested from client PID 1788 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:29:09.060937 systemd[1]: Reloading... Apr 30 03:29:09.124776 zram_generator::config[1829]: No configuration found. Apr 30 03:29:09.217613 ldconfig[1784]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:29:09.297339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:29:09.368814 systemd[1]: Reloading finished in 307 ms. Apr 30 03:29:09.386799 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:29:09.388135 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:29:09.397964 systemd[1]: Starting ensure-sysext.service... 
Apr 30 03:29:09.407618 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:29:09.423357 systemd[1]: Reloading requested from client PID 1888 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:29:09.423546 systemd[1]: Reloading... Apr 30 03:29:09.440956 systemd-tmpfiles[1889]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:29:09.441500 systemd-tmpfiles[1889]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:29:09.443676 systemd-tmpfiles[1889]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:29:09.444310 systemd-tmpfiles[1889]: ACLs are not supported, ignoring. Apr 30 03:29:09.444505 systemd-tmpfiles[1889]: ACLs are not supported, ignoring. Apr 30 03:29:09.450457 systemd-tmpfiles[1889]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:29:09.450471 systemd-tmpfiles[1889]: Skipping /boot Apr 30 03:29:09.462236 systemd-tmpfiles[1889]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:29:09.462253 systemd-tmpfiles[1889]: Skipping /boot Apr 30 03:29:09.505848 systemd-networkd[1654]: eth0: Gained IPv6LL Apr 30 03:29:09.533778 zram_generator::config[1920]: No configuration found. Apr 30 03:29:09.664195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:29:09.738981 systemd[1]: Reloading finished in 314 ms. Apr 30 03:29:09.753404 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:29:09.760380 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:29:09.766690 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Apr 30 03:29:09.769906 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:29:09.781939 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:29:09.786911 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:29:09.790982 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:29:09.800929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:29:09.801134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:29:09.808657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:29:09.813607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:29:09.822051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:29:09.824981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:29:09.825123 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:29:09.827774 augenrules[2002]: No rules Apr 30 03:29:09.828176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:29:09.828360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:29:09.834479 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:29:09.843701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:29:09.845056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:29:09.847962 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 30 03:29:09.849210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:29:09.851387 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:29:09.862789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:29:09.863134 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:29:09.872340 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:29:09.874334 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:29:09.887972 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:29:09.888356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:29:09.895107 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:29:09.908880 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:29:09.919431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:29:09.934133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:29:09.935954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:29:09.936297 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:29:09.938416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:29:09.941148 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Apr 30 03:29:09.944538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:29:09.944789 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:29:09.945944 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:29:09.946167 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:29:09.949366 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:29:09.949629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:29:09.954541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:29:09.956809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:29:09.961003 systemd[1]: Finished ensure-sysext.service. Apr 30 03:29:09.976234 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:29:09.976329 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:29:09.997009 systemd-resolved[1990]: Positive Trust Anchors: Apr 30 03:29:09.997028 systemd-resolved[1990]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:29:09.997118 systemd-resolved[1990]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:29:10.002529 systemd-resolved[1990]: Defaulting to hostname 'linux'. 
Apr 30 03:29:10.004960 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:29:10.005574 systemd[1]: Reached target network.target - Network. Apr 30 03:29:10.006029 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:29:10.006536 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:29:10.010871 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:29:10.011869 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:29:10.011926 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:29:10.012591 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:29:10.013183 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:29:10.014111 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:29:10.014813 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:29:10.015304 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:29:10.015850 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:29:10.015893 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:29:10.016374 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:29:10.018215 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:29:10.020318 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:29:10.022947 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Apr 30 03:29:10.025252 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:29:10.025784 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:29:10.026218 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:29:10.026742 systemd[1]: System is tainted: cgroupsv1 Apr 30 03:29:10.026798 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:29:10.026820 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:29:10.028894 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:29:10.033985 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:29:10.036964 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:29:10.046002 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:29:10.059119 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:29:10.062205 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:29:10.086055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:10.091290 jq[2047]: false Apr 30 03:29:10.094000 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:29:10.105076 systemd[1]: Started ntpd.service - Network Time Service. Apr 30 03:29:10.123291 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:29:10.137488 systemd[1]: Starting setup-oem.service - Setup OEM... 
Apr 30 03:29:10.137941 extend-filesystems[2049]: Found loop4 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found loop5 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found loop6 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found loop7 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found nvme0n1 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found nvme0n1p1 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found nvme0n1p2 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found nvme0n1p3 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found usr Apr 30 03:29:10.140898 extend-filesystems[2049]: Found nvme0n1p4 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found nvme0n1p6 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found nvme0n1p7 Apr 30 03:29:10.140898 extend-filesystems[2049]: Found nvme0n1p9 Apr 30 03:29:10.140898 extend-filesystems[2049]: Checking size of /dev/nvme0n1p9 Apr 30 03:29:10.150934 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:29:10.168951 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:29:10.179068 dbus-daemon[2046]: [system] SELinux support is enabled Apr 30 03:29:10.182957 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:29:10.186960 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:29:10.194041 dbus-daemon[2046]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1654 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 30 03:29:10.205226 extend-filesystems[2049]: Resized partition /dev/nvme0n1p9 Apr 30 03:29:10.211968 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 30 03:29:10.214252 extend-filesystems[2079]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:29:10.226126 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Apr 30 03:29:10.227878 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:29:10.233083 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:29:10.250055 coreos-metadata[2045]: Apr 30 03:29:10.248 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 03:29:10.255639 coreos-metadata[2045]: Apr 30 03:29:10.255 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 30 03:29:10.255059 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:29:10.255870 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:29:10.258601 coreos-metadata[2045]: Apr 30 03:29:10.257 INFO Fetch successful Apr 30 03:29:10.258601 coreos-metadata[2045]: Apr 30 03:29:10.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 30 03:29:10.264956 coreos-metadata[2045]: Apr 30 03:29:10.262 INFO Fetch successful Apr 30 03:29:10.264956 coreos-metadata[2045]: Apr 30 03:29:10.263 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 30 03:29:10.269973 coreos-metadata[2045]: Apr 30 03:29:10.268 INFO Fetch successful Apr 30 03:29:10.269973 coreos-metadata[2045]: Apr 30 03:29:10.269 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 30 03:29:10.271046 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:29:10.273296 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 30 03:29:10.279122 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting Apr 30 03:29:10.279122 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 03:29:10.279122 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: ---------------------------------------------------- Apr 30 03:29:10.279122 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: ntp-4 is maintained by Network Time Foundation, Apr 30 03:29:10.279122 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 03:29:10.279122 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: corporation. Support and training for ntp-4 are Apr 30 03:29:10.279122 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: available at https://www.nwtime.org/support Apr 30 03:29:10.279122 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: ---------------------------------------------------- Apr 30 03:29:10.279694 update_engine[2077]: I20250430 03:29:10.271692 2077 main.cc:92] Flatcar Update Engine starting Apr 30 03:29:10.279694 update_engine[2077]: I20250430 03:29:10.278925 2077 update_check_scheduler.cc:74] Next update check in 9m11s Apr 30 03:29:10.276142 ntpd[2056]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.273 INFO Fetch successful Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.273 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.274 INFO Fetch failed with 404: resource not found Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.274 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.275 INFO Fetch successful Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.275 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.275 INFO Fetch successful Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.275 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.277 INFO Fetch successful Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.277 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.277 INFO Fetch successful Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.278 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 30 03:29:10.280220 coreos-metadata[2045]: Apr 30 03:29:10.279 INFO Fetch successful Apr 30 03:29:10.277179 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:29:10.276166 ntpd[2056]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 03:29:10.276177 ntpd[2056]: ---------------------------------------------------- Apr 30 03:29:10.276187 ntpd[2056]: ntp-4 is maintained by Network Time Foundation, Apr 30 03:29:10.276201 ntpd[2056]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 03:29:10.276211 ntpd[2056]: corporation. 
Support and training for ntp-4 are Apr 30 03:29:10.276221 ntpd[2056]: available at https://www.nwtime.org/support Apr 30 03:29:10.276231 ntpd[2056]: ---------------------------------------------------- Apr 30 03:29:10.290067 ntpd[2056]: proto: precision = 0.070 usec (-24) Apr 30 03:29:10.291538 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: proto: precision = 0.070 usec (-24) Apr 30 03:29:10.291538 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: basedate set to 2025-04-17 Apr 30 03:29:10.291538 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: gps base set to 2025-04-20 (week 2363) Apr 30 03:29:10.290403 ntpd[2056]: basedate set to 2025-04-17 Apr 30 03:29:10.290419 ntpd[2056]: gps base set to 2025-04-20 (week 2363) Apr 30 03:29:10.292302 jq[2088]: true Apr 30 03:29:10.293477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:29:10.297597 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:29:10.320136 ntpd[2056]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 03:29:10.320307 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 03:29:10.320427 ntpd[2056]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 03:29:10.320895 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 03:29:10.320895 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 03:29:10.320895 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: Listen normally on 3 eth0 172.31.17.153:123 Apr 30 03:29:10.320895 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: Listen normally on 4 lo [::1]:123 Apr 30 03:29:10.320738 ntpd[2056]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 03:29:10.320799 ntpd[2056]: Listen normally on 3 eth0 172.31.17.153:123 Apr 30 03:29:10.320844 ntpd[2056]: Listen normally on 4 lo [::1]:123 Apr 30 03:29:10.321699 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: Listen normally on 5 eth0 [fe80::4b3:45ff:febd:b7a1%2]:123 Apr 30 03:29:10.321699 ntpd[2056]: 30 Apr 
03:29:10 ntpd[2056]: Listening on routing socket on fd #22 for interface updates Apr 30 03:29:10.321298 ntpd[2056]: Listen normally on 5 eth0 [fe80::4b3:45ff:febd:b7a1%2]:123 Apr 30 03:29:10.321351 ntpd[2056]: Listening on routing socket on fd #22 for interface updates Apr 30 03:29:10.323122 ntpd[2056]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:29:10.323637 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:29:10.323637 ntpd[2056]: 30 Apr 03:29:10 ntpd[2056]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:29:10.323153 ntpd[2056]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:29:10.359352 (ntainerd)[2102]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:29:10.405936 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:29:10.407125 dbus-daemon[2046]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 03:29:10.413339 jq[2100]: true Apr 30 03:29:10.428043 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Apr 30 03:29:10.415635 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:29:10.415672 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:29:10.458984 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 30 03:29:10.460917 extend-filesystems[2079]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 30 03:29:10.460917 extend-filesystems[2079]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 03:29:10.460917 extend-filesystems[2079]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Apr 30 03:29:10.478211 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1657) Apr 30 03:29:10.478255 extend-filesystems[2049]: Resized filesystem in /dev/nvme0n1p9 Apr 30 03:29:10.465507 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:29:10.465558 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:29:10.469619 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:29:10.480517 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:29:10.489223 systemd-logind[2074]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 03:29:10.489251 systemd-logind[2074]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 30 03:29:10.489277 systemd-logind[2074]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:29:10.491411 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:29:10.491712 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:29:10.494594 systemd-logind[2074]: New seat seat0. Apr 30 03:29:10.540050 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:29:10.543600 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:29:10.549885 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 30 03:29:10.599179 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 30 03:29:10.603197 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 30 03:29:10.730362 locksmithd[2133]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:29:10.755811 bash[2228]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:29:10.751376 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:29:10.772245 systemd[1]: Starting sshkeys.service... Apr 30 03:29:10.837880 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:29:10.847227 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:29:10.943784 dbus-daemon[2046]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 30 03:29:10.943983 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 30 03:29:10.946705 dbus-daemon[2046]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2132 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 30 03:29:10.960889 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 30 03:29:10.982244 coreos-metadata[2250]: Apr 30 03:29:10.981 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 03:29:10.987965 coreos-metadata[2250]: Apr 30 03:29:10.987 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 30 03:29:10.994840 coreos-metadata[2250]: Apr 30 03:29:10.988 INFO Fetch successful Apr 30 03:29:10.994840 coreos-metadata[2250]: Apr 30 03:29:10.989 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 30 03:29:10.995708 coreos-metadata[2250]: Apr 30 03:29:10.995 INFO Fetch successful Apr 30 03:29:11.001871 unknown[2250]: wrote ssh authorized keys file for user: core Apr 30 03:29:11.010641 amazon-ssm-agent[2178]: Initializing new seelog logger Apr 30 03:29:11.011233 amazon-ssm-agent[2178]: New Seelog Logger Creation Complete Apr 30 03:29:11.012131 amazon-ssm-agent[2178]: 2025/04/30 03:29:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:29:11.012131 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:29:11.012131 amazon-ssm-agent[2178]: 2025/04/30 03:29:11 processing appconfig overrides Apr 30 03:29:11.016623 amazon-ssm-agent[2178]: 2025/04/30 03:29:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:29:11.019394 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:29:11.019394 amazon-ssm-agent[2178]: 2025/04/30 03:29:11 processing appconfig overrides Apr 30 03:29:11.019394 amazon-ssm-agent[2178]: 2025/04/30 03:29:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:29:11.019394 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 30 03:29:11.019394 amazon-ssm-agent[2178]: 2025/04/30 03:29:11 processing appconfig overrides Apr 30 03:29:11.025854 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO Proxy environment variables: Apr 30 03:29:11.034622 polkitd[2261]: Started polkitd version 121 Apr 30 03:29:11.036888 amazon-ssm-agent[2178]: 2025/04/30 03:29:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:29:11.036888 amazon-ssm-agent[2178]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:29:11.036888 amazon-ssm-agent[2178]: 2025/04/30 03:29:11 processing appconfig overrides Apr 30 03:29:11.047868 polkitd[2261]: Loading rules from directory /etc/polkit-1/rules.d Apr 30 03:29:11.047981 polkitd[2261]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 30 03:29:11.048822 polkitd[2261]: Finished loading, compiling and executing 2 rules Apr 30 03:29:11.049585 dbus-daemon[2046]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 30 03:29:11.049911 systemd[1]: Started polkit.service - Authorization Manager. Apr 30 03:29:11.053579 polkitd[2261]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 30 03:29:11.066786 update-ssh-keys[2268]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:29:11.068212 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:29:11.085400 systemd[1]: Finished sshkeys.service. Apr 30 03:29:11.120979 systemd-hostnamed[2132]: Hostname set to (transient) Apr 30 03:29:11.121658 systemd-resolved[1990]: System hostname changed to 'ip-172-31-17-153'. 
Apr 30 03:29:11.130053 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO no_proxy: Apr 30 03:29:11.230482 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO https_proxy: Apr 30 03:29:11.318610 containerd[2102]: time="2025-04-30T03:29:11.318509226Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:29:11.330771 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO http_proxy: Apr 30 03:29:11.413094 containerd[2102]: time="2025-04-30T03:29:11.412779425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:29:11.415294 containerd[2102]: time="2025-04-30T03:29:11.415158632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:29:11.415294 containerd[2102]: time="2025-04-30T03:29:11.415221143Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:29:11.415294 containerd[2102]: time="2025-04-30T03:29:11.415245416Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:29:11.415944 containerd[2102]: time="2025-04-30T03:29:11.415694379Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:29:11.415944 containerd[2102]: time="2025-04-30T03:29:11.415722097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:29:11.415944 containerd[2102]: time="2025-04-30T03:29:11.415856532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:29:11.415944 containerd[2102]: time="2025-04-30T03:29:11.415875239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:29:11.416592 containerd[2102]: time="2025-04-30T03:29:11.416446119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:29:11.416592 containerd[2102]: time="2025-04-30T03:29:11.416474881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 03:29:11.416592 containerd[2102]: time="2025-04-30T03:29:11.416517914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:29:11.416592 containerd[2102]: time="2025-04-30T03:29:11.416538116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 03:29:11.417148 containerd[2102]: time="2025-04-30T03:29:11.416942202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:29:11.417632 containerd[2102]: time="2025-04-30T03:29:11.417453222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:29:11.417881 containerd[2102]: time="2025-04-30T03:29:11.417858491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..."
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:29:11.418036 containerd[2102]: time="2025-04-30T03:29:11.417930721Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 03:29:11.418239 containerd[2102]: time="2025-04-30T03:29:11.418144544Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 03:29:11.418239 containerd[2102]: time="2025-04-30T03:29:11.418220187Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 03:29:11.424115 containerd[2102]: time="2025-04-30T03:29:11.423635781Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 03:29:11.424115 containerd[2102]: time="2025-04-30T03:29:11.423713043Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 03:29:11.424115 containerd[2102]: time="2025-04-30T03:29:11.423738200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 03:29:11.424115 containerd[2102]: time="2025-04-30T03:29:11.423807368Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 03:29:11.424115 containerd[2102]: time="2025-04-30T03:29:11.423831570Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 03:29:11.424115 containerd[2102]: time="2025-04-30T03:29:11.424005128Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.424795755Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..."
type=io.containerd.runtime.v2
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.424934736Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.424964913Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.424985979Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.425006129Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.425025726Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.425044029Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.425065049Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.425088888Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.425108262Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.425126889Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..."
type=io.containerd.service.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.425145328Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.425173080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426408 containerd[2102]: time="2025-04-30T03:29:11.425194052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425213241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425233366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425251259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425270964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425289106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425308550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425327295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425354536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..."
type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425378023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425395562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425417170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425439438Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425468840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425487133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.426996 containerd[2102]: time="2025-04-30T03:29:11.425503767Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 03:29:11.427590 containerd[2102]: time="2025-04-30T03:29:11.425553050Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 03:29:11.427590 containerd[2102]: time="2025-04-30T03:29:11.425576034Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 03:29:11.427590 containerd[2102]: time="2025-04-30T03:29:11.425592962Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..."
type=io.containerd.internal.v1
Apr 30 03:29:11.427590 containerd[2102]: time="2025-04-30T03:29:11.425610747Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 03:29:11.427590 containerd[2102]: time="2025-04-30T03:29:11.425625621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.427590 containerd[2102]: time="2025-04-30T03:29:11.425644150Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 03:29:11.427590 containerd[2102]: time="2025-04-30T03:29:11.425658598Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 03:29:11.427590 containerd[2102]: time="2025-04-30T03:29:11.425674038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 03:29:11.429765 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO Checking if agent identity type OnPrem can be assumed
Apr 30 03:29:11.431518 containerd[2102]: time="2025-04-30T03:29:11.430231101Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false
PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 03:29:11.431518 containerd[2102]: time="2025-04-30T03:29:11.430328076Z" level=info msg="Connect containerd service"
Apr 30 03:29:11.431518 containerd[2102]: time="2025-04-30T03:29:11.430393681Z" level=info msg="using legacy CRI server"
Apr 30 03:29:11.431518 containerd[2102]: time="2025-04-30T03:29:11.430404395Z" level=info msg="using experimental
NRI integration - disable nri plugin to prevent this"
Apr 30 03:29:11.431518 containerd[2102]: time="2025-04-30T03:29:11.430535141Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 03:29:11.431518 containerd[2102]: time="2025-04-30T03:29:11.431241557Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 03:29:11.433956 containerd[2102]: time="2025-04-30T03:29:11.433906416Z" level=info msg="Start subscribing containerd event"
Apr 30 03:29:11.435735 containerd[2102]: time="2025-04-30T03:29:11.434247481Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 03:29:11.435735 containerd[2102]: time="2025-04-30T03:29:11.434315011Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 03:29:11.435735 containerd[2102]: time="2025-04-30T03:29:11.434360531Z" level=info msg="Start recovering state"
Apr 30 03:29:11.435735 containerd[2102]: time="2025-04-30T03:29:11.434437197Z" level=info msg="Start event monitor"
Apr 30 03:29:11.435735 containerd[2102]: time="2025-04-30T03:29:11.434455859Z" level=info msg="Start snapshots syncer"
Apr 30 03:29:11.435735 containerd[2102]: time="2025-04-30T03:29:11.434470837Z" level=info msg="Start cni network conf syncer for default"
Apr 30 03:29:11.435735 containerd[2102]: time="2025-04-30T03:29:11.434481159Z" level=info msg="Start streaming server"
Apr 30 03:29:11.434691 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 03:29:11.438783 containerd[2102]: time="2025-04-30T03:29:11.437822431Z" level=info msg="containerd successfully booted in 0.122476s"
Apr 30 03:29:11.526178 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO Checking if agent identity type EC2 can be assumed
Apr 30 03:29:11.625573 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO Agent will take identity from EC2
Apr 30 03:29:11.716995 sshd_keygen[2105]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 03:29:11.724896 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 03:29:11.759305 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 03:29:11.773064 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 03:29:11.785124 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 03:29:11.785471 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 03:29:11.799315 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 03:29:11.824354 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 03:29:11.832333 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 03:29:11.846507 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 03:29:11.856217 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 30 03:29:11.858403 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 03:29:11.923273 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 03:29:11.923273 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 30 03:29:11.923273 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 30 03:29:11.923273 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [amazon-ssm-agent] Starting Core Agent
Apr 30 03:29:11.923273 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 30 03:29:11.923273 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [Registrar] Starting registrar module
Apr 30 03:29:11.923273 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 30 03:29:11.923803 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [EC2Identity] EC2 registration was successful.
Apr 30 03:29:11.923803 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [CredentialRefresher] credentialRefresher has started
Apr 30 03:29:11.923803 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 30 03:29:11.923803 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 30 03:29:11.923803 amazon-ssm-agent[2178]: 2025-04-30 03:29:11 INFO [CredentialRefresher] Next credential rotation will be in 30.166657663 minutes
Apr 30 03:29:12.942458 amazon-ssm-agent[2178]: 2025-04-30 03:29:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 30 03:29:12.947052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:12.950051 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 03:29:12.951332 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:29:12.951905 systemd[1]: Startup finished in 6.143s (kernel) + 6.769s (userspace) = 12.913s.
Apr 30 03:29:13.045162 amazon-ssm-agent[2178]: 2025-04-30 03:29:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2318) started
Apr 30 03:29:13.146383 amazon-ssm-agent[2178]: 2025-04-30 03:29:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 30 03:29:14.157858 kubelet[2317]: E0430 03:29:14.157718 2317 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:29:14.160527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:29:14.161774 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:29:14.558584 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 03:29:14.570182 systemd[1]: Started sshd@0-172.31.17.153:22-147.75.109.163:57768.service - OpenSSH per-connection server daemon (147.75.109.163:57768).
Apr 30 03:29:14.823106 sshd[2341]: Accepted publickey for core from 147.75.109.163 port 57768 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:29:14.825215 sshd[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:29:14.833519 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 03:29:14.840071 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 03:29:14.843000 systemd-logind[2074]: New session 1 of user core.
Apr 30 03:29:14.853746 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 03:29:14.862164 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 03:29:14.865523 (systemd)[2347]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 03:29:14.990478 systemd[2347]: Queued start job for default target default.target.
Apr 30 03:29:14.990882 systemd[2347]: Created slice app.slice - User Application Slice.
Apr 30 03:29:14.990906 systemd[2347]: Reached target paths.target - Paths.
Apr 30 03:29:14.990918 systemd[2347]: Reached target timers.target - Timers.
Apr 30 03:29:14.994861 systemd[2347]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 03:29:15.006806 systemd[2347]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 03:29:15.006873 systemd[2347]: Reached target sockets.target - Sockets.
Apr 30 03:29:15.006887 systemd[2347]: Reached target basic.target - Basic System.
Apr 30 03:29:15.006929 systemd[2347]: Reached target default.target - Main User Target.
Apr 30 03:29:15.006957 systemd[2347]: Startup finished in 134ms.
Apr 30 03:29:15.007174 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 03:29:15.013089 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 03:29:15.217174 systemd[1]: Started sshd@1-172.31.17.153:22-147.75.109.163:57780.service - OpenSSH per-connection server daemon (147.75.109.163:57780).
Apr 30 03:29:15.458725 sshd[2359]: Accepted publickey for core from 147.75.109.163 port 57780 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:29:15.460463 sshd[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:29:15.465250 systemd-logind[2074]: New session 2 of user core.
Apr 30 03:29:15.475120 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 03:29:15.650317 sshd[2359]: pam_unix(sshd:session): session closed for user core
Apr 30 03:29:15.653073 systemd[1]: sshd@1-172.31.17.153:22-147.75.109.163:57780.service: Deactivated successfully.
Apr 30 03:29:15.656593 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 03:29:15.657521 systemd-logind[2074]: Session 2 logged out. Waiting for processes to exit.
Apr 30 03:29:15.658534 systemd-logind[2074]: Removed session 2.
Apr 30 03:29:15.693146 systemd[1]: Started sshd@2-172.31.17.153:22-147.75.109.163:57796.service - OpenSSH per-connection server daemon (147.75.109.163:57796).
Apr 30 03:29:15.937374 sshd[2367]: Accepted publickey for core from 147.75.109.163 port 57796 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:29:15.939154 sshd[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:29:15.944569 systemd-logind[2074]: New session 3 of user core.
Apr 30 03:29:15.949365 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 03:29:16.125515 sshd[2367]: pam_unix(sshd:session): session closed for user core
Apr 30 03:29:16.129293 systemd[1]: sshd@2-172.31.17.153:22-147.75.109.163:57796.service: Deactivated successfully.
Apr 30 03:29:16.132091 systemd-logind[2074]: Session 3 logged out. Waiting for processes to exit.
Apr 30 03:29:16.132651 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 03:29:16.133688 systemd-logind[2074]: Removed session 3.
Apr 30 03:29:16.167074 systemd[1]: Started sshd@3-172.31.17.153:22-147.75.109.163:60886.service - OpenSSH per-connection server daemon (147.75.109.163:60886).
Apr 30 03:29:16.413440 sshd[2375]: Accepted publickey for core from 147.75.109.163 port 60886 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:29:16.414837 sshd[2375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:29:16.420100 systemd-logind[2074]: New session 4 of user core.
Apr 30 03:29:16.426128 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 03:29:16.605935 sshd[2375]: pam_unix(sshd:session): session closed for user core
Apr 30 03:29:16.609200 systemd[1]: sshd@3-172.31.17.153:22-147.75.109.163:60886.service: Deactivated successfully.
Apr 30 03:29:16.612331 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 03:29:16.613066 systemd-logind[2074]: Session 4 logged out. Waiting for processes to exit.
Apr 30 03:29:16.613867 systemd-logind[2074]: Removed session 4.
Apr 30 03:29:16.647059 systemd[1]: Started sshd@4-172.31.17.153:22-147.75.109.163:60890.service - OpenSSH per-connection server daemon (147.75.109.163:60890).
Apr 30 03:29:16.888794 sshd[2383]: Accepted publickey for core from 147.75.109.163 port 60890 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:29:16.889837 sshd[2383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:29:16.895197 systemd-logind[2074]: New session 5 of user core.
Apr 30 03:29:16.901093 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 03:29:17.056938 sudo[2387]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 03:29:17.057236 sudo[2387]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:29:17.070332 sudo[2387]: pam_unix(sudo:session): session closed for user root
Apr 30 03:29:17.107573 sshd[2383]: pam_unix(sshd:session): session closed for user core
Apr 30 03:29:17.111212 systemd[1]: sshd@4-172.31.17.153:22-147.75.109.163:60890.service: Deactivated successfully.
Apr 30 03:29:17.114976 systemd-logind[2074]: Session 5 logged out. Waiting for processes to exit.
Apr 30 03:29:17.116037 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 03:29:17.118925 systemd-logind[2074]: Removed session 5.
Apr 30 03:29:17.157427 systemd[1]: Started sshd@5-172.31.17.153:22-147.75.109.163:60906.service - OpenSSH per-connection server daemon (147.75.109.163:60906).
Apr 30 03:29:18.371755 systemd-resolved[1990]: Clock change detected. Flushing caches.
Apr 30 03:29:18.496198 sshd[2392]: Accepted publickey for core from 147.75.109.163 port 60906 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:29:18.497691 sshd[2392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:29:18.502587 systemd-logind[2074]: New session 6 of user core.
Apr 30 03:29:18.508177 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 03:29:18.652235 sudo[2397]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 03:29:18.652529 sudo[2397]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:29:18.656533 sudo[2397]: pam_unix(sudo:session): session closed for user root
Apr 30 03:29:18.662203 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 03:29:18.662540 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:29:18.675239 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 03:29:18.695245 auditctl[2400]: No rules
Apr 30 03:29:18.695804 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 03:29:18.696158 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 03:29:18.702027 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:29:18.733870 augenrules[2419]: No rules
Apr 30 03:29:18.735681 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:29:18.738934 sudo[2396]: pam_unix(sudo:session): session closed for user root
Apr 30 03:29:18.776557 sshd[2392]: pam_unix(sshd:session): session closed for user core
Apr 30 03:29:18.779921 systemd[1]: sshd@5-172.31.17.153:22-147.75.109.163:60906.service: Deactivated successfully.
Apr 30 03:29:18.782054 systemd-logind[2074]: Session 6 logged out. Waiting for processes to exit.
Apr 30 03:29:18.783721 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 03:29:18.784963 systemd-logind[2074]: Removed session 6.
Apr 30 03:29:18.820065 systemd[1]: Started sshd@6-172.31.17.153:22-147.75.109.163:60922.service - OpenSSH per-connection server daemon (147.75.109.163:60922).
Apr 30 03:29:19.060826 sshd[2428]: Accepted publickey for core from 147.75.109.163 port 60922 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:29:19.062400 sshd[2428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:29:19.068156 systemd-logind[2074]: New session 7 of user core.
Apr 30 03:29:19.074032 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 03:29:19.215086 sudo[2432]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 03:29:19.215376 sudo[2432]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:29:20.372898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:20.385984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:29:20.412680 systemd[1]: Reloading requested from client PID 2471 ('systemctl') (unit session-7.scope)...
Apr 30 03:29:20.412698 systemd[1]: Reloading...
Apr 30 03:29:20.529668 zram_generator::config[2511]: No configuration found.
Apr 30 03:29:20.688077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:29:20.773946 systemd[1]: Reloading finished in 360 ms.
Apr 30 03:29:20.817352 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 03:29:20.817464 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 03:29:20.817844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:20.821990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:29:21.038860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:21.047186 (kubelet)[2583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:29:21.100753 kubelet[2583]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:29:21.100753 kubelet[2583]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 03:29:21.100753 kubelet[2583]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:29:21.101248 kubelet[2583]: I0430 03:29:21.100822 2583 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 03:29:21.295771 kubelet[2583]: I0430 03:29:21.295669 2583 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 03:29:21.295771 kubelet[2583]: I0430 03:29:21.295698 2583 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 03:29:21.296322 kubelet[2583]: I0430 03:29:21.295965 2583 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 03:29:21.324203 kubelet[2583]: I0430 03:29:21.324155 2583 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:29:21.337528 kubelet[2583]: I0430 03:29:21.337490 2583 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 03:29:21.338447 kubelet[2583]: I0430 03:29:21.337944 2583 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 03:29:21.338447 kubelet[2583]: I0430 03:29:21.337974 2583 container_manager_linux.go:270] "Creating Container Manager object based on Node Config"
nodeConfig={"NodeName":"172.31.17.153","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:29:21.339502 kubelet[2583]: I0430 03:29:21.339440 2583 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:29:21.339502 kubelet[2583]: I0430 03:29:21.339468 2583 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:29:21.339836 kubelet[2583]: I0430 03:29:21.339816 2583 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:21.340997 kubelet[2583]: I0430 03:29:21.340980 2583 kubelet.go:400] "Attempting to sync node 
with API server" Apr 30 03:29:21.340997 kubelet[2583]: I0430 03:29:21.340999 2583 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:29:21.341088 kubelet[2583]: I0430 03:29:21.341022 2583 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:29:21.341088 kubelet[2583]: I0430 03:29:21.341039 2583 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:29:21.341497 kubelet[2583]: E0430 03:29:21.341476 2583 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:21.341688 kubelet[2583]: E0430 03:29:21.341662 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:21.346381 kubelet[2583]: I0430 03:29:21.346190 2583 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:29:21.348100 kubelet[2583]: I0430 03:29:21.348048 2583 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:29:21.348199 kubelet[2583]: W0430 03:29:21.348135 2583 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 03:29:21.349027 kubelet[2583]: I0430 03:29:21.348702 2583 server.go:1264] "Started kubelet" Apr 30 03:29:21.349082 kubelet[2583]: I0430 03:29:21.349026 2583 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:29:21.350404 kubelet[2583]: I0430 03:29:21.349992 2583 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:29:21.356700 kubelet[2583]: I0430 03:29:21.356602 2583 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:29:21.357193 kubelet[2583]: I0430 03:29:21.356881 2583 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:29:21.357193 kubelet[2583]: I0430 03:29:21.357071 2583 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:29:21.357843 kubelet[2583]: W0430 03:29:21.357822 2583 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Apr 30 03:29:21.357930 kubelet[2583]: E0430 03:29:21.357850 2583 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Apr 30 03:29:21.357930 kubelet[2583]: W0430 03:29:21.357892 2583 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.17.153" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Apr 30 03:29:21.357930 kubelet[2583]: E0430 03:29:21.357902 2583 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.17.153" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Apr 30 03:29:21.363343 
kubelet[2583]: E0430 03:29:21.363134 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:21.363343 kubelet[2583]: I0430 03:29:21.363183 2583 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:29:21.363343 kubelet[2583]: I0430 03:29:21.363279 2583 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:29:21.363343 kubelet[2583]: I0430 03:29:21.363341 2583 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:29:21.364881 kubelet[2583]: E0430 03:29:21.364418 2583 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:29:21.365999 kubelet[2583]: I0430 03:29:21.365156 2583 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:29:21.365999 kubelet[2583]: I0430 03:29:21.365271 2583 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:29:21.367510 kubelet[2583]: I0430 03:29:21.367460 2583 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:29:21.384214 kubelet[2583]: E0430 03:29:21.384128 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.17.153\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 30 03:29:21.391848 kubelet[2583]: E0430 03:29:21.384660 2583 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.153.183afafaf6c9c5df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.153,UID:172.31.17.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.17.153,},FirstTimestamp:2025-04-30 03:29:21.348675039 +0000 UTC m=+0.297203904,LastTimestamp:2025-04-30 03:29:21.348675039 +0000 UTC m=+0.297203904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.153,}" Apr 30 03:29:21.392303 kubelet[2583]: W0430 03:29:21.384877 2583 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Apr 30 03:29:21.392468 kubelet[2583]: E0430 03:29:21.392454 2583 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Apr 30 03:29:21.395154 kubelet[2583]: E0430 03:29:21.395044 2583 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.153.183afafaf7b9cc01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.153,UID:172.31.17.153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.17.153,},FirstTimestamp:2025-04-30 03:29:21.364405249 +0000 UTC m=+0.312934116,LastTimestamp:2025-04-30 03:29:21.364405249 +0000 UTC m=+0.312934116,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.153,}" Apr 30 03:29:21.403316 kubelet[2583]: I0430 03:29:21.403294 2583 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:29:21.403478 kubelet[2583]: I0430 03:29:21.403468 2583 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:29:21.403619 kubelet[2583]: I0430 03:29:21.403545 2583 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:21.411983 kubelet[2583]: I0430 03:29:21.411718 2583 policy_none.go:49] "None policy: Start" Apr 30 03:29:21.416920 kubelet[2583]: I0430 03:29:21.416850 2583 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:29:21.416920 kubelet[2583]: I0430 03:29:21.416882 2583 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:29:21.429127 kubelet[2583]: I0430 03:29:21.425871 2583 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:29:21.429127 kubelet[2583]: I0430 03:29:21.426102 2583 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:29:21.432776 kubelet[2583]: I0430 03:29:21.432753 2583 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:29:21.439392 kubelet[2583]: E0430 03:29:21.439247 2583 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.17.153\" not found" Apr 30 03:29:21.464736 kubelet[2583]: I0430 03:29:21.464697 2583 kubelet_node_status.go:73] "Attempting to register node" node="172.31.17.153" Apr 30 03:29:21.468347 kubelet[2583]: I0430 03:29:21.468210 2583 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:29:21.470129 kubelet[2583]: I0430 03:29:21.470078 2583 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:29:21.470129 kubelet[2583]: I0430 03:29:21.470109 2583 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:29:21.470129 kubelet[2583]: I0430 03:29:21.470137 2583 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:29:21.470425 kubelet[2583]: E0430 03:29:21.470187 2583 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 30 03:29:21.475079 kubelet[2583]: I0430 03:29:21.474871 2583 kubelet_node_status.go:76] "Successfully registered node" node="172.31.17.153" Apr 30 03:29:21.493217 kubelet[2583]: E0430 03:29:21.493161 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:21.560710 sudo[2432]: pam_unix(sudo:session): session closed for user root Apr 30 03:29:21.593947 kubelet[2583]: E0430 03:29:21.593900 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:21.597786 sshd[2428]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:21.602047 systemd[1]: sshd@6-172.31.17.153:22-147.75.109.163:60922.service: Deactivated successfully. Apr 30 03:29:21.607181 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:29:21.608146 systemd-logind[2074]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:29:21.609349 systemd-logind[2074]: Removed session 7. 
Apr 30 03:29:21.695011 kubelet[2583]: E0430 03:29:21.694943 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:21.795613 kubelet[2583]: E0430 03:29:21.795573 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:21.896802 kubelet[2583]: E0430 03:29:21.896683 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:21.997662 kubelet[2583]: E0430 03:29:21.997612 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:22.098501 kubelet[2583]: E0430 03:29:22.098437 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:22.199255 kubelet[2583]: E0430 03:29:22.199138 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:22.297945 kubelet[2583]: I0430 03:29:22.297898 2583 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 30 03:29:22.298119 kubelet[2583]: W0430 03:29:22.298099 2583 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Apr 30 03:29:22.300129 kubelet[2583]: E0430 03:29:22.300095 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:22.342541 kubelet[2583]: E0430 03:29:22.342498 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:22.400289 kubelet[2583]: E0430 03:29:22.400253 2583 kubelet_node_status.go:462] 
"Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:22.501427 kubelet[2583]: E0430 03:29:22.501298 2583 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.153\" not found" Apr 30 03:29:22.603766 kubelet[2583]: I0430 03:29:22.603582 2583 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Apr 30 03:29:22.604197 containerd[2102]: time="2025-04-30T03:29:22.604148745Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:29:22.604717 kubelet[2583]: I0430 03:29:22.604382 2583 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Apr 30 03:29:23.342681 kubelet[2583]: I0430 03:29:23.342575 2583 apiserver.go:52] "Watching apiserver" Apr 30 03:29:23.342681 kubelet[2583]: E0430 03:29:23.342608 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:23.349758 kubelet[2583]: I0430 03:29:23.349470 2583 topology_manager.go:215] "Topology Admit Handler" podUID="69369dcc-5bb6-4835-83b2-b49f1ef80401" podNamespace="calico-system" podName="csi-node-driver-frgxn" Apr 30 03:29:23.351489 kubelet[2583]: I0430 03:29:23.349847 2583 topology_manager.go:215] "Topology Admit Handler" podUID="ac5ef954-f243-4e52-85b6-a8e87e2d8d83" podNamespace="kube-system" podName="kube-proxy-gtthh" Apr 30 03:29:23.351489 kubelet[2583]: I0430 03:29:23.349941 2583 topology_manager.go:215] "Topology Admit Handler" podUID="949bfb9b-cd44-4ffb-981c-65f67bb0ba84" podNamespace="calico-system" podName="calico-node-6pvzz" Apr 30 03:29:23.351489 kubelet[2583]: E0430 03:29:23.350033 2583 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-frgxn" podUID="69369dcc-5bb6-4835-83b2-b49f1ef80401" Apr 30 03:29:23.365269 kubelet[2583]: I0430 03:29:23.365228 2583 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:29:23.376068 kubelet[2583]: I0430 03:29:23.376004 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-lib-modules\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376068 kubelet[2583]: I0430 03:29:23.376050 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac5ef954-f243-4e52-85b6-a8e87e2d8d83-kube-proxy\") pod \"kube-proxy-gtthh\" (UID: \"ac5ef954-f243-4e52-85b6-a8e87e2d8d83\") " pod="kube-system/kube-proxy-gtthh" Apr 30 03:29:23.376068 kubelet[2583]: I0430 03:29:23.376070 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac5ef954-f243-4e52-85b6-a8e87e2d8d83-xtables-lock\") pod \"kube-proxy-gtthh\" (UID: \"ac5ef954-f243-4e52-85b6-a8e87e2d8d83\") " pod="kube-system/kube-proxy-gtthh" Apr 30 03:29:23.376272 kubelet[2583]: I0430 03:29:23.376088 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzs49\" (UniqueName: \"kubernetes.io/projected/ac5ef954-f243-4e52-85b6-a8e87e2d8d83-kube-api-access-rzs49\") pod \"kube-proxy-gtthh\" (UID: \"ac5ef954-f243-4e52-85b6-a8e87e2d8d83\") " pod="kube-system/kube-proxy-gtthh" Apr 30 03:29:23.376272 kubelet[2583]: I0430 03:29:23.376104 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-xtables-lock\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376272 kubelet[2583]: I0430 03:29:23.376118 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-net-dir\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376272 kubelet[2583]: I0430 03:29:23.376132 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69369dcc-5bb6-4835-83b2-b49f1ef80401-socket-dir\") pod \"csi-node-driver-frgxn\" (UID: \"69369dcc-5bb6-4835-83b2-b49f1ef80401\") " pod="calico-system/csi-node-driver-frgxn" Apr 30 03:29:23.376272 kubelet[2583]: I0430 03:29:23.376146 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac5ef954-f243-4e52-85b6-a8e87e2d8d83-lib-modules\") pod \"kube-proxy-gtthh\" (UID: \"ac5ef954-f243-4e52-85b6-a8e87e2d8d83\") " pod="kube-system/kube-proxy-gtthh" Apr 30 03:29:23.376391 kubelet[2583]: I0430 03:29:23.376161 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-policysync\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376391 kubelet[2583]: I0430 03:29:23.376176 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-var-run-calico\") pod 
\"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376391 kubelet[2583]: I0430 03:29:23.376189 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-var-lib-calico\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376391 kubelet[2583]: I0430 03:29:23.376207 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-bin-dir\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376391 kubelet[2583]: I0430 03:29:23.376220 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-log-dir\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376501 kubelet[2583]: I0430 03:29:23.376235 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/69369dcc-5bb6-4835-83b2-b49f1ef80401-varrun\") pod \"csi-node-driver-frgxn\" (UID: \"69369dcc-5bb6-4835-83b2-b49f1ef80401\") " pod="calico-system/csi-node-driver-frgxn" Apr 30 03:29:23.376501 kubelet[2583]: I0430 03:29:23.376249 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-tigera-ca-bundle\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " 
pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376501 kubelet[2583]: I0430 03:29:23.376263 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-node-certs\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376501 kubelet[2583]: I0430 03:29:23.376279 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-flexvol-driver-host\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376501 kubelet[2583]: I0430 03:29:23.376299 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km7gc\" (UniqueName: \"kubernetes.io/projected/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-kube-api-access-km7gc\") pod \"calico-node-6pvzz\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") " pod="calico-system/calico-node-6pvzz" Apr 30 03:29:23.376617 kubelet[2583]: I0430 03:29:23.376314 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69369dcc-5bb6-4835-83b2-b49f1ef80401-kubelet-dir\") pod \"csi-node-driver-frgxn\" (UID: \"69369dcc-5bb6-4835-83b2-b49f1ef80401\") " pod="calico-system/csi-node-driver-frgxn" Apr 30 03:29:23.376617 kubelet[2583]: I0430 03:29:23.376352 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69369dcc-5bb6-4835-83b2-b49f1ef80401-registration-dir\") pod \"csi-node-driver-frgxn\" (UID: \"69369dcc-5bb6-4835-83b2-b49f1ef80401\") " pod="calico-system/csi-node-driver-frgxn" 
Apr 30 03:29:23.376617 kubelet[2583]: I0430 03:29:23.376368 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs855\" (UniqueName: \"kubernetes.io/projected/69369dcc-5bb6-4835-83b2-b49f1ef80401-kube-api-access-hs855\") pod \"csi-node-driver-frgxn\" (UID: \"69369dcc-5bb6-4835-83b2-b49f1ef80401\") " pod="calico-system/csi-node-driver-frgxn" Apr 30 03:29:23.479104 kubelet[2583]: E0430 03:29:23.478508 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.479104 kubelet[2583]: W0430 03:29:23.478532 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.479104 kubelet[2583]: E0430 03:29:23.478551 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.479104 kubelet[2583]: E0430 03:29:23.478777 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.479104 kubelet[2583]: W0430 03:29:23.478785 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.479104 kubelet[2583]: E0430 03:29:23.478798 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:23.479104 kubelet[2583]: E0430 03:29:23.478943 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.479104 kubelet[2583]: W0430 03:29:23.478949 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.479104 kubelet[2583]: E0430 03:29:23.478968 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.479399 kubelet[2583]: E0430 03:29:23.479141 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.479399 kubelet[2583]: W0430 03:29:23.479147 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.479399 kubelet[2583]: E0430 03:29:23.479165 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:23.480817 kubelet[2583]: E0430 03:29:23.479799 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.480817 kubelet[2583]: W0430 03:29:23.479813 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.480817 kubelet[2583]: E0430 03:29:23.480245 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.480817 kubelet[2583]: E0430 03:29:23.480595 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.480817 kubelet[2583]: W0430 03:29:23.480602 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.480817 kubelet[2583]: E0430 03:29:23.480619 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 30 03:29:23.481016 kubelet[2583]: E0430 03:29:23.480967 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.481016 kubelet[2583]: W0430 03:29:23.480975 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.481016 kubelet[2583]: E0430 03:29:23.480998 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.481201 kubelet[2583]: E0430 03:29:23.481189 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.481201 kubelet[2583]: W0430 03:29:23.481200 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.481464 kubelet[2583]: E0430 03:29:23.481450 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.481464 kubelet[2583]: W0430 03:29:23.481464 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.481536 kubelet[2583]: E0430 03:29:23.481475 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.485669 kubelet[2583]: E0430 03:29:23.481666 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.485669 kubelet[2583]: W0430 03:29:23.481675 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.485669 kubelet[2583]: E0430 03:29:23.481684 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.485669 kubelet[2583]: E0430 03:29:23.481742 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.485669 kubelet[2583]: E0430 03:29:23.482195 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.485669 kubelet[2583]: W0430 03:29:23.482203 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.485669 kubelet[2583]: E0430 03:29:23.482212 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.485669 kubelet[2583]: E0430 03:29:23.482412 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.485669 kubelet[2583]: W0430 03:29:23.482419 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.485669 kubelet[2583]: E0430 03:29:23.482603 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.485978 kubelet[2583]: W0430 03:29:23.482610 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.485978 kubelet[2583]: E0430 03:29:23.482619 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.485978 kubelet[2583]: E0430 03:29:23.482786 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.485978 kubelet[2583]: W0430 03:29:23.482792 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.485978 kubelet[2583]: E0430 03:29:23.482800 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.485978 kubelet[2583]: E0430 03:29:23.482926 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.485978 kubelet[2583]: W0430 03:29:23.482932 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.485978 kubelet[2583]: E0430 03:29:23.482940 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.485978 kubelet[2583]: E0430 03:29:23.483064 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.485978 kubelet[2583]: W0430 03:29:23.483069 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.486201 kubelet[2583]: E0430 03:29:23.483082 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.486201 kubelet[2583]: E0430 03:29:23.483248 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.486201 kubelet[2583]: W0430 03:29:23.483254 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.486201 kubelet[2583]: E0430 03:29:23.483261 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.486201 kubelet[2583]: E0430 03:29:23.483276 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.486201 kubelet[2583]: E0430 03:29:23.483422 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.486201 kubelet[2583]: W0430 03:29:23.483429 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.486201 kubelet[2583]: E0430 03:29:23.483435 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.490738 kubelet[2583]: E0430 03:29:23.486960 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.490738 kubelet[2583]: W0430 03:29:23.486975 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.490738 kubelet[2583]: E0430 03:29:23.486989 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.491671 kubelet[2583]: E0430 03:29:23.491162 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.491671 kubelet[2583]: W0430 03:29:23.491182 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.491671 kubelet[2583]: E0430 03:29:23.491200 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.497490 kubelet[2583]: E0430 03:29:23.497466 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.497596 kubelet[2583]: W0430 03:29:23.497585 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.497702 kubelet[2583]: E0430 03:29:23.497642 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 30 03:29:23.501364 kubelet[2583]: E0430 03:29:23.501340 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 30 03:29:23.501691 kubelet[2583]: W0430 03:29:23.501361 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 30 03:29:23.501727 kubelet[2583]: E0430 03:29:23.501703 2583 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 30 03:29:23.654864 containerd[2102]: time="2025-04-30T03:29:23.654749047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6pvzz,Uid:949bfb9b-cd44-4ffb-981c-65f67bb0ba84,Namespace:calico-system,Attempt:0,}"
Apr 30 03:29:23.658835 containerd[2102]: time="2025-04-30T03:29:23.658795514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gtthh,Uid:ac5ef954-f243-4e52-85b6-a8e87e2d8d83,Namespace:kube-system,Attempt:0,}"
Apr 30 03:29:24.183711 containerd[2102]: time="2025-04-30T03:29:24.183635337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:29:24.184858 containerd[2102]: time="2025-04-30T03:29:24.184815089Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:29:24.187670 containerd[2102]: time="2025-04-30T03:29:24.186551536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 03:29:24.191102 containerd[2102]: time="2025-04-30T03:29:24.191056822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Apr 30 03:29:24.192283 containerd[2102]: time="2025-04-30T03:29:24.192229877Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:29:24.195140 containerd[2102]: time="2025-04-30T03:29:24.195075224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:29:24.196102 containerd[2102]: time="2025-04-30T03:29:24.196059819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.180694ms"
Apr 30 03:29:24.199672 containerd[2102]: time="2025-04-30T03:29:24.199601465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.771316ms"
Apr 30 03:29:24.329134 containerd[2102]: time="2025-04-30T03:29:24.328864349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:24.329134 containerd[2102]: time="2025-04-30T03:29:24.328993620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:24.329134 containerd[2102]: time="2025-04-30T03:29:24.329032826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:24.329370 containerd[2102]: time="2025-04-30T03:29:24.329199540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:24.329370 containerd[2102]: time="2025-04-30T03:29:24.329094190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:24.329461 containerd[2102]: time="2025-04-30T03:29:24.329396948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:24.329503 containerd[2102]: time="2025-04-30T03:29:24.329452644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:24.329752 containerd[2102]: time="2025-04-30T03:29:24.329610401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:24.343960 kubelet[2583]: E0430 03:29:24.343042 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:24.457636 containerd[2102]: time="2025-04-30T03:29:24.457515204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6pvzz,Uid:949bfb9b-cd44-4ffb-981c-65f67bb0ba84,Namespace:calico-system,Attempt:0,} returns sandbox id \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\""
Apr 30 03:29:24.459600 containerd[2102]: time="2025-04-30T03:29:24.459562866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gtthh,Uid:ac5ef954-f243-4e52-85b6-a8e87e2d8d83,Namespace:kube-system,Attempt:0,} returns sandbox id \"b09ecbd98cccebdaca82324c868851ab04993cc1cf16d888eafe466ad4678685\""
Apr 30 03:29:24.462041 containerd[2102]: time="2025-04-30T03:29:24.462008647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
Apr 30 03:29:24.492738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719738453.mount: Deactivated successfully.
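The repeated `driver-call.go` errors above have a single cause: kubelet probes the FlexVolume plugin directory `nodeagent~uds`, execs the driver binary `uds` with the `init` subcommand, and the binary is missing ("executable file not found in $PATH"), so the call produces empty output that then fails JSON decoding ("unexpected end of JSON input"). A minimal sketch of the exec protocol, assuming nothing about the real `nodeagent~uds` driver beyond what the log shows (the function name and the exact JSON fields here are illustrative):

```shell
# Sketch of the FlexVolume exec protocol (illustrative only; NOT the missing
# nodeagent~uds binary referenced in the log). kubelet execs the driver with
# a subcommand and parses stdout as JSON, so a driver that is absent or
# prints nothing yields exactly the "unexpected end of JSON input" errors.
flexvolume_driver() {
  case "$1" in
    init)
      # "init" must report a JSON status; capabilities.attach=false tells
      # kubelet this driver does not implement attach/detach calls.
      echo '{"status":"Success","capabilities":{"attach":false}}'
      ;;
    *)
      # Unimplemented calls report a "Not supported" status.
      echo '{"status":"Not supported"}'
      ;;
  esac
}

flexvolume_driver init
```

An executable that answers `init` this way, installed at the probed path `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds`, would quiet the probe loop; the `<vendor>~<driver>` directory naming is the FlexVolume convention.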
Apr 30 03:29:25.344038 kubelet[2583]: E0430 03:29:25.343992 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:25.471790 kubelet[2583]: E0430 03:29:25.471348 2583 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frgxn" podUID="69369dcc-5bb6-4835-83b2-b49f1ef80401"
Apr 30 03:29:25.609220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861400284.mount: Deactivated successfully.
Apr 30 03:29:25.727965 containerd[2102]: time="2025-04-30T03:29:25.727900465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:25.729050 containerd[2102]: time="2025-04-30T03:29:25.728986184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6859697"
Apr 30 03:29:25.730197 containerd[2102]: time="2025-04-30T03:29:25.730142535Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:25.733624 containerd[2102]: time="2025-04-30T03:29:25.732957503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:25.733624 containerd[2102]: time="2025-04-30T03:29:25.733468987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.271259452s"
Apr 30 03:29:25.733624 containerd[2102]: time="2025-04-30T03:29:25.733498416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\""
Apr 30 03:29:25.734949 containerd[2102]: time="2025-04-30T03:29:25.734920416Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 03:29:25.736242 containerd[2102]: time="2025-04-30T03:29:25.736208229Z" level=info msg="CreateContainer within sandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 30 03:29:25.759860 containerd[2102]: time="2025-04-30T03:29:25.759806726Z" level=info msg="CreateContainer within sandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\""
Apr 30 03:29:25.760789 containerd[2102]: time="2025-04-30T03:29:25.760748631Z" level=info msg="StartContainer for \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\""
Apr 30 03:29:25.814780 containerd[2102]: time="2025-04-30T03:29:25.814632072Z" level=info msg="StartContainer for \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\" returns successfully"
Apr 30 03:29:25.907185 containerd[2102]: time="2025-04-30T03:29:25.907042646Z" level=info msg="shim disconnected" id=17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f namespace=k8s.io
Apr 30 03:29:25.907185 containerd[2102]: time="2025-04-30T03:29:25.907095262Z" level=warning msg="cleaning up after shim disconnected" id=17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f namespace=k8s.io
Apr 30 03:29:25.907185 containerd[2102]: time="2025-04-30T03:29:25.907103839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:29:26.346128 kubelet[2583]: E0430 03:29:26.344641 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:26.575505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f-rootfs.mount: Deactivated successfully.
Apr 30 03:29:26.846785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262484630.mount: Deactivated successfully.
Apr 30 03:29:27.331659 containerd[2102]: time="2025-04-30T03:29:27.331513530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:27.332727 containerd[2102]: time="2025-04-30T03:29:27.332661255Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
Apr 30 03:29:27.334113 containerd[2102]: time="2025-04-30T03:29:27.334086808Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:27.337527 containerd[2102]: time="2025-04-30T03:29:27.337465128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:27.338464 containerd[2102]: time="2025-04-30T03:29:27.338425506Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.603470597s"
Apr 30 03:29:27.338464 containerd[2102]: time="2025-04-30T03:29:27.338464896Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
Apr 30 03:29:27.340195 containerd[2102]: time="2025-04-30T03:29:27.340171894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\""
Apr 30 03:29:27.343664 containerd[2102]: time="2025-04-30T03:29:27.341449009Z" level=info msg="CreateContainer within sandbox \"b09ecbd98cccebdaca82324c868851ab04993cc1cf16d888eafe466ad4678685\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 03:29:27.347408 kubelet[2583]: E0430 03:29:27.347382 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:27.359230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200552324.mount: Deactivated successfully.
Apr 30 03:29:27.363594 containerd[2102]: time="2025-04-30T03:29:27.363549009Z" level=info msg="CreateContainer within sandbox \"b09ecbd98cccebdaca82324c868851ab04993cc1cf16d888eafe466ad4678685\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"daf3b61b8aa4b9d15e1bacae61a7a03acaef836a1cbf94ce54d05ecd0f3e9109\""
Apr 30 03:29:27.364324 containerd[2102]: time="2025-04-30T03:29:27.364276434Z" level=info msg="StartContainer for \"daf3b61b8aa4b9d15e1bacae61a7a03acaef836a1cbf94ce54d05ecd0f3e9109\""
Apr 30 03:29:27.428589 containerd[2102]: time="2025-04-30T03:29:27.428442120Z" level=info msg="StartContainer for \"daf3b61b8aa4b9d15e1bacae61a7a03acaef836a1cbf94ce54d05ecd0f3e9109\" returns successfully"
Apr 30 03:29:27.471021 kubelet[2583]: E0430 03:29:27.470724 2583 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frgxn" podUID="69369dcc-5bb6-4835-83b2-b49f1ef80401"
Apr 30 03:29:28.348668 kubelet[2583]: E0430 03:29:28.348611 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:29.349075 kubelet[2583]: E0430 03:29:29.349038 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:29.471362 kubelet[2583]: E0430 03:29:29.471319 2583 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frgxn" podUID="69369dcc-5bb6-4835-83b2-b49f1ef80401"
Apr 30 03:29:30.349501 kubelet[2583]: E0430 03:29:30.349412 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:30.874760 containerd[2102]: time="2025-04-30T03:29:30.874695247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:30.876001 containerd[2102]: time="2025-04-30T03:29:30.875938262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683"
Apr 30 03:29:30.877151 containerd[2102]: time="2025-04-30T03:29:30.877095595Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:30.885931 containerd[2102]: time="2025-04-30T03:29:30.884230631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:30.886205 containerd[2102]: time="2025-04-30T03:29:30.886117259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.545911908s"
Apr 30 03:29:30.887110 containerd[2102]: time="2025-04-30T03:29:30.887066743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\""
Apr 30 03:29:30.889587 containerd[2102]: time="2025-04-30T03:29:30.889548853Z" level=info msg="CreateContainer within sandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 30 03:29:30.912993 containerd[2102]: time="2025-04-30T03:29:30.912947572Z" level=info msg="CreateContainer within sandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\""
Apr 30 03:29:30.913525 containerd[2102]: time="2025-04-30T03:29:30.913500327Z" level=info msg="StartContainer for \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\""
Apr 30 03:29:30.968958 containerd[2102]: time="2025-04-30T03:29:30.968915624Z" level=info msg="StartContainer for \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\" returns successfully"
Apr 30 03:29:31.350581 kubelet[2583]: E0430 03:29:31.350457 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:31.471756 kubelet[2583]: E0430 03:29:31.471325 2583 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frgxn" podUID="69369dcc-5bb6-4835-83b2-b49f1ef80401"
Apr 30 03:29:31.552034 kubelet[2583]: I0430 03:29:31.551746 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gtthh" podStartSLOduration=7.674797925 podStartE2EDuration="10.551721671s" podCreationTimestamp="2025-04-30 03:29:21 +0000 UTC" firstStartedPulling="2025-04-30 03:29:24.462498771 +0000 UTC m=+3.411027624" lastFinishedPulling="2025-04-30 03:29:27.339422505 +0000 UTC m=+6.287951370" observedRunningTime="2025-04-30 03:29:27.513777688 +0000 UTC m=+6.462306562" watchObservedRunningTime="2025-04-30 03:29:31.551721671 +0000 UTC m=+10.500250576"
Apr 30 03:29:31.717701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438-rootfs.mount: Deactivated successfully.
Apr 30 03:29:31.728037 kubelet[2583]: I0430 03:29:31.727848 2583 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Apr 30 03:29:32.176391 containerd[2102]: time="2025-04-30T03:29:32.176001557Z" level=info msg="shim disconnected" id=4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438 namespace=k8s.io
Apr 30 03:29:32.176391 containerd[2102]: time="2025-04-30T03:29:32.176052855Z" level=warning msg="cleaning up after shim disconnected" id=4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438 namespace=k8s.io
Apr 30 03:29:32.176391 containerd[2102]: time="2025-04-30T03:29:32.176061794Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:29:32.351046 kubelet[2583]: E0430 03:29:32.350987 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:32.526410 containerd[2102]: time="2025-04-30T03:29:32.526011403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
Apr 30 03:29:33.351810 kubelet[2583]: E0430 03:29:33.351744 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:33.474090 containerd[2102]: time="2025-04-30T03:29:33.473802919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frgxn,Uid:69369dcc-5bb6-4835-83b2-b49f1ef80401,Namespace:calico-system,Attempt:0,}"
Apr 30 03:29:33.549753 containerd[2102]: time="2025-04-30T03:29:33.549692854Z" level=error msg="Failed to destroy network for sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:33.551970 containerd[2102]: time="2025-04-30T03:29:33.551923114Z" level=error msg="encountered an error cleaning up failed sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:33.552104 containerd[2102]: time="2025-04-30T03:29:33.552001757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frgxn,Uid:69369dcc-5bb6-4835-83b2-b49f1ef80401,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:33.552826 kubelet[2583]: E0430 03:29:33.552270 2583 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:33.552826 kubelet[2583]: E0430 03:29:33.552366 2583 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-frgxn"
Apr 30 03:29:33.552826 kubelet[2583]: E0430 03:29:33.552395 2583 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-frgxn"
Apr 30 03:29:33.552977 kubelet[2583]: E0430 03:29:33.552447 2583 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-frgxn_calico-system(69369dcc-5bb6-4835-83b2-b49f1ef80401)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-frgxn_calico-system(69369dcc-5bb6-4835-83b2-b49f1ef80401)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-frgxn" podUID="69369dcc-5bb6-4835-83b2-b49f1ef80401"
Apr 30 03:29:33.552892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746-shm.mount: Deactivated successfully.
Apr 30 03:29:34.352098 kubelet[2583]: E0430 03:29:34.352047 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:34.530807 kubelet[2583]: I0430 03:29:34.530778 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746"
Apr 30 03:29:34.531718 containerd[2102]: time="2025-04-30T03:29:34.531376868Z" level=info msg="StopPodSandbox for \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\""
Apr 30 03:29:34.531718 containerd[2102]: time="2025-04-30T03:29:34.531531702Z" level=info msg="Ensure that sandbox 235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746 in task-service has been cleanup successfully"
Apr 30 03:29:34.559475 containerd[2102]: time="2025-04-30T03:29:34.559426559Z" level=error msg="StopPodSandbox for \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\" failed" error="failed to destroy network for sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:34.559727 kubelet[2583]: E0430 03:29:34.559692 2583 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746"
Apr 30 03:29:34.559793 kubelet[2583]: E0430 03:29:34.559753 2583 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746"}
Apr 30 03:29:34.559825 kubelet[2583]: E0430 03:29:34.559807 2583 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69369dcc-5bb6-4835-83b2-b49f1ef80401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 30 03:29:34.559895 kubelet[2583]: E0430 03:29:34.559831 2583 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69369dcc-5bb6-4835-83b2-b49f1ef80401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-frgxn" podUID="69369dcc-5bb6-4835-83b2-b49f1ef80401"
Apr 30 03:29:34.951742 kubelet[2583]: I0430 03:29:34.951694 2583 topology_manager.go:215] "Topology Admit Handler" podUID="abcfaef6-2ff2-40a4-acd6-df403883e1ea" podNamespace="default" podName="nginx-deployment-85f456d6dd-f6xt7"
Apr 30 03:29:34.957193 kubelet[2583]: W0430 03:29:34.957134 2583 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.17.153" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.17.153' and this object
Apr 30 03:29:34.957193 kubelet[2583]: E0430 03:29:34.957174 2583 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.17.153" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.17.153' and this object
Apr 30 03:29:35.058671 kubelet[2583]: I0430 03:29:35.057014 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hgvk\" (UniqueName: \"kubernetes.io/projected/abcfaef6-2ff2-40a4-acd6-df403883e1ea-kube-api-access-5hgvk\") pod \"nginx-deployment-85f456d6dd-f6xt7\" (UID: \"abcfaef6-2ff2-40a4-acd6-df403883e1ea\") " pod="default/nginx-deployment-85f456d6dd-f6xt7"
Apr 30 03:29:35.352980 kubelet[2583]: E0430 03:29:35.352682 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:36.157107 containerd[2102]: time="2025-04-30T03:29:36.157011987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-f6xt7,Uid:abcfaef6-2ff2-40a4-acd6-df403883e1ea,Namespace:default,Attempt:0,}"
Apr 30 03:29:36.319897 containerd[2102]: time="2025-04-30T03:29:36.319688281Z" level=error msg="Failed to destroy network for sandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 30 03:29:36.320200 containerd[2102]: time="2025-04-30T03:29:36.319989022Z" level=error msg="encountered an error cleaning up failed sandbox
\"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:36.320200 containerd[2102]: time="2025-04-30T03:29:36.320037973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-f6xt7,Uid:abcfaef6-2ff2-40a4-acd6-df403883e1ea,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:36.322804 kubelet[2583]: E0430 03:29:36.320867 2583 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:36.322804 kubelet[2583]: E0430 03:29:36.320922 2583 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-f6xt7" Apr 30 03:29:36.322804 kubelet[2583]: E0430 03:29:36.320942 2583 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-f6xt7" Apr 30 03:29:36.323198 kubelet[2583]: E0430 03:29:36.320981 2583 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-f6xt7_default(abcfaef6-2ff2-40a4-acd6-df403883e1ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-f6xt7_default(abcfaef6-2ff2-40a4-acd6-df403883e1ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-f6xt7" podUID="abcfaef6-2ff2-40a4-acd6-df403883e1ea" Apr 30 03:29:36.323871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d-shm.mount: Deactivated successfully. 
Apr 30 03:29:36.353568 kubelet[2583]: E0430 03:29:36.353378 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:36.538158 kubelet[2583]: I0430 03:29:36.537852 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:29:36.539078 containerd[2102]: time="2025-04-30T03:29:36.538602459Z" level=info msg="StopPodSandbox for \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\"" Apr 30 03:29:36.539078 containerd[2102]: time="2025-04-30T03:29:36.538817238Z" level=info msg="Ensure that sandbox dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d in task-service has been cleanup successfully" Apr 30 03:29:36.589746 containerd[2102]: time="2025-04-30T03:29:36.589696735Z" level=error msg="StopPodSandbox for \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\" failed" error="failed to destroy network for sandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:36.589978 kubelet[2583]: E0430 03:29:36.589928 2583 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:29:36.590067 kubelet[2583]: E0430 03:29:36.589988 2583 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d"} Apr 30 03:29:36.590067 kubelet[2583]: E0430 03:29:36.590033 2583 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abcfaef6-2ff2-40a4-acd6-df403883e1ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:36.590211 kubelet[2583]: E0430 03:29:36.590064 2583 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abcfaef6-2ff2-40a4-acd6-df403883e1ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-f6xt7" podUID="abcfaef6-2ff2-40a4-acd6-df403883e1ea" Apr 30 03:29:37.353667 kubelet[2583]: E0430 03:29:37.353615 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:38.165992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3775764624.mount: Deactivated successfully. 
Apr 30 03:29:38.250373 containerd[2102]: time="2025-04-30T03:29:38.250186503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:38.260468 containerd[2102]: time="2025-04-30T03:29:38.260392784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:29:38.266615 containerd[2102]: time="2025-04-30T03:29:38.266542170Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:38.276960 containerd[2102]: time="2025-04-30T03:29:38.276880714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:38.277590 containerd[2102]: time="2025-04-30T03:29:38.277430221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 5.751357559s" Apr 30 03:29:38.277590 containerd[2102]: time="2025-04-30T03:29:38.277476087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:29:38.295566 containerd[2102]: time="2025-04-30T03:29:38.295530651Z" level=info msg="CreateContainer within sandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:29:38.354144 kubelet[2583]: E0430 03:29:38.354082 2583 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:38.357543 containerd[2102]: time="2025-04-30T03:29:38.357476876Z" level=info msg="CreateContainer within sandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\"" Apr 30 03:29:38.358374 containerd[2102]: time="2025-04-30T03:29:38.358335358Z" level=info msg="StartContainer for \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\"" Apr 30 03:29:38.462595 containerd[2102]: time="2025-04-30T03:29:38.461628369Z" level=info msg="StartContainer for \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\" returns successfully" Apr 30 03:29:38.541672 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:29:38.541789 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 03:29:39.355228 kubelet[2583]: E0430 03:29:39.355170 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:39.549391 kubelet[2583]: I0430 03:29:39.549361 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:40.078670 kernel: bpftool[3332]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:29:40.343244 (udev-worker)[3106]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:29:40.343631 systemd-networkd[1654]: vxlan.calico: Link UP Apr 30 03:29:40.343637 systemd-networkd[1654]: vxlan.calico: Gained carrier Apr 30 03:29:40.359813 kubelet[2583]: E0430 03:29:40.358817 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:40.381143 (udev-worker)[3380]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 03:29:41.341588 kubelet[2583]: E0430 03:29:41.341533 2583 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:41.359959 kubelet[2583]: E0430 03:29:41.359889 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:42.088316 systemd-networkd[1654]: vxlan.calico: Gained IPv6LL Apr 30 03:29:42.223919 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 30 03:29:42.360323 kubelet[2583]: E0430 03:29:42.360202 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:43.360921 kubelet[2583]: E0430 03:29:43.360874 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:44.361771 kubelet[2583]: E0430 03:29:44.361717 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:44.371528 ntpd[2056]: Listen normally on 6 vxlan.calico 192.168.66.128:123 Apr 30 03:29:44.371604 ntpd[2056]: Listen normally on 7 vxlan.calico [fe80::64ca:35ff:fe87:76a0%3]:123 Apr 30 03:29:44.372082 ntpd[2056]: 30 Apr 03:29:44 ntpd[2056]: Listen normally on 6 vxlan.calico 192.168.66.128:123 Apr 30 03:29:44.372082 ntpd[2056]: 30 Apr 03:29:44 ntpd[2056]: Listen normally on 7 vxlan.calico [fe80::64ca:35ff:fe87:76a0%3]:123 Apr 30 03:29:45.362745 kubelet[2583]: E0430 03:29:45.362691 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:46.362944 kubelet[2583]: E0430 03:29:46.362878 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:47.363983 kubelet[2583]: E0430 03:29:47.363926 2583 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:47.475382 containerd[2102]: time="2025-04-30T03:29:47.475019842Z" level=info msg="StopPodSandbox for \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\"" Apr 30 03:29:47.475382 containerd[2102]: time="2025-04-30T03:29:47.475020064Z" level=info msg="StopPodSandbox for \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\"" Apr 30 03:29:47.627696 kubelet[2583]: I0430 03:29:47.627254 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6pvzz" podStartSLOduration=12.810155035 podStartE2EDuration="26.627229726s" podCreationTimestamp="2025-04-30 03:29:21 +0000 UTC" firstStartedPulling="2025-04-30 03:29:24.461424607 +0000 UTC m=+3.409953471" lastFinishedPulling="2025-04-30 03:29:38.278499309 +0000 UTC m=+17.227028162" observedRunningTime="2025-04-30 03:29:38.572485704 +0000 UTC m=+17.521014575" watchObservedRunningTime="2025-04-30 03:29:47.627229726 +0000 UTC m=+26.575758579" Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.631 [INFO][3470] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.631 [INFO][3470] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" iface="eth0" netns="/var/run/netns/cni-d72b3231-fc2a-90c7-451e-09078e9eec37" Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.631 [INFO][3470] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" iface="eth0" netns="/var/run/netns/cni-d72b3231-fc2a-90c7-451e-09078e9eec37" Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.632 [INFO][3470] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" iface="eth0" netns="/var/run/netns/cni-d72b3231-fc2a-90c7-451e-09078e9eec37" Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.632 [INFO][3470] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.632 [INFO][3470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.743 [INFO][3483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" HandleID="k8s-pod-network.235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.745 [INFO][3483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.745 [INFO][3483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.756 [WARNING][3483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" HandleID="k8s-pod-network.235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.756 [INFO][3483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" HandleID="k8s-pod-network.235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.760 [INFO][3483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:47.763268 containerd[2102]: 2025-04-30 03:29:47.761 [INFO][3470] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:29:47.766728 containerd[2102]: time="2025-04-30T03:29:47.763716993Z" level=info msg="TearDown network for sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\" successfully" Apr 30 03:29:47.766728 containerd[2102]: time="2025-04-30T03:29:47.763741990Z" level=info msg="StopPodSandbox for \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\" returns successfully" Apr 30 03:29:47.766728 containerd[2102]: time="2025-04-30T03:29:47.764423802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frgxn,Uid:69369dcc-5bb6-4835-83b2-b49f1ef80401,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:47.767509 systemd[1]: run-netns-cni\x2dd72b3231\x2dfc2a\x2d90c7\x2d451e\x2d09078e9eec37.mount: Deactivated successfully. 
Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.627 [INFO][3469] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.628 [INFO][3469] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" iface="eth0" netns="/var/run/netns/cni-e7198222-049e-5e9e-c4a3-ab3c1ebb07b4" Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.628 [INFO][3469] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" iface="eth0" netns="/var/run/netns/cni-e7198222-049e-5e9e-c4a3-ab3c1ebb07b4" Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.630 [INFO][3469] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" iface="eth0" netns="/var/run/netns/cni-e7198222-049e-5e9e-c4a3-ab3c1ebb07b4" Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.630 [INFO][3469] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.630 [INFO][3469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.746 [INFO][3484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" HandleID="k8s-pod-network.dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.746 [INFO][3484] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.760 [INFO][3484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.769 [WARNING][3484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" HandleID="k8s-pod-network.dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.769 [INFO][3484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" HandleID="k8s-pod-network.dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.772 [INFO][3484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:47.776108 containerd[2102]: 2025-04-30 03:29:47.773 [INFO][3469] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:29:47.777804 containerd[2102]: time="2025-04-30T03:29:47.777765223Z" level=info msg="TearDown network for sandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\" successfully" Apr 30 03:29:47.777804 containerd[2102]: time="2025-04-30T03:29:47.777802874Z" level=info msg="StopPodSandbox for \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\" returns successfully" Apr 30 03:29:47.779453 containerd[2102]: time="2025-04-30T03:29:47.778477254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-f6xt7,Uid:abcfaef6-2ff2-40a4-acd6-df403883e1ea,Namespace:default,Attempt:1,}" Apr 30 03:29:47.779054 systemd[1]: run-netns-cni\x2de7198222\x2d049e\x2d5e9e\x2dc4a3\x2dab3c1ebb07b4.mount: Deactivated successfully. Apr 30 03:29:47.980007 systemd-networkd[1654]: cali2c2a90f8240: Link UP Apr 30 03:29:47.982149 systemd-networkd[1654]: cali2c2a90f8240: Gained carrier Apr 30 03:29:47.986334 (udev-worker)[3534]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.854 [INFO][3496] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.153-k8s-csi--node--driver--frgxn-eth0 csi-node-driver- calico-system 69369dcc-5bb6-4835-83b2-b49f1ef80401 1043 0 2025-04-30 03:29:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.17.153 csi-node-driver-frgxn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2c2a90f8240 [] []}} ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" Namespace="calico-system" Pod="csi-node-driver-frgxn" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--frgxn-" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.855 [INFO][3496] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" Namespace="calico-system" Pod="csi-node-driver-frgxn" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.906 [INFO][3520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" HandleID="k8s-pod-network.30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.922 [INFO][3520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" HandleID="k8s-pod-network.30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" 
Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b3e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.17.153", "pod":"csi-node-driver-frgxn", "timestamp":"2025-04-30 03:29:47.906530591 +0000 UTC"}, Hostname:"172.31.17.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.922 [INFO][3520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.922 [INFO][3520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.922 [INFO][3520] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.153' Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.925 [INFO][3520] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" host="172.31.17.153" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.935 [INFO][3520] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.153" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.941 [INFO][3520] ipam/ipam.go 489: Trying affinity for 192.168.66.128/26 host="172.31.17.153" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.946 [INFO][3520] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.128/26 host="172.31.17.153" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.950 [INFO][3520] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="172.31.17.153" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.950 [INFO][3520] ipam/ipam.go 1180: Attempting to assign 1 addresses 
from block block=192.168.66.128/26 handle="k8s-pod-network.30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" host="172.31.17.153" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.953 [INFO][3520] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20 Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.961 [INFO][3520] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" host="172.31.17.153" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.972 [INFO][3520] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.129/26] block=192.168.66.128/26 handle="k8s-pod-network.30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" host="172.31.17.153" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.972 [INFO][3520] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.129/26] handle="k8s-pod-network.30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" host="172.31.17.153" Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.972 [INFO][3520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:48.009847 containerd[2102]: 2025-04-30 03:29:47.972 [INFO][3520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.129/26] IPv6=[] ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" HandleID="k8s-pod-network.30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0"
Apr 30 03:29:48.011524 containerd[2102]: 2025-04-30 03:29:47.974 [INFO][3496] cni-plugin/k8s.go 386: Populated endpoint ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" Namespace="calico-system" Pod="csi-node-driver-frgxn" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-csi--node--driver--frgxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69369dcc-5bb6-4835-83b2-b49f1ef80401", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"", Pod:"csi-node-driver-frgxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2c2a90f8240", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:29:48.011524 containerd[2102]: 2025-04-30 03:29:47.974 [INFO][3496] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.129/32] ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" Namespace="calico-system" Pod="csi-node-driver-frgxn" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--frgxn-eth0"
Apr 30 03:29:48.011524 containerd[2102]: 2025-04-30 03:29:47.974 [INFO][3496] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c2a90f8240 ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" Namespace="calico-system" Pod="csi-node-driver-frgxn" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--frgxn-eth0"
Apr 30 03:29:48.011524 containerd[2102]: 2025-04-30 03:29:47.982 [INFO][3496] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" Namespace="calico-system" Pod="csi-node-driver-frgxn" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--frgxn-eth0"
Apr 30 03:29:48.011524 containerd[2102]: 2025-04-30 03:29:47.984 [INFO][3496] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" Namespace="calico-system" Pod="csi-node-driver-frgxn" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-csi--node--driver--frgxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69369dcc-5bb6-4835-83b2-b49f1ef80401", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20", Pod:"csi-node-driver-frgxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2c2a90f8240", MAC:"7a:20:a1:a6:2a:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:29:48.011524 containerd[2102]: 2025-04-30 03:29:48.008 [INFO][3496] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20" Namespace="calico-system" Pod="csi-node-driver-frgxn" WorkloadEndpoint="172.31.17.153-k8s-csi--node--driver--frgxn-eth0"
Apr 30 03:29:48.038115 containerd[2102]: time="2025-04-30T03:29:48.037672187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:48.038115 containerd[2102]: time="2025-04-30T03:29:48.037752536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:48.038115 containerd[2102]: time="2025-04-30T03:29:48.037776093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:48.038115 containerd[2102]: time="2025-04-30T03:29:48.037915904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:48.044795 systemd-networkd[1654]: calif3b2e0f6a5c: Link UP
Apr 30 03:29:48.045967 systemd-networkd[1654]: calif3b2e0f6a5c: Gained carrier
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:47.867 [INFO][3508] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0 nginx-deployment-85f456d6dd- default abcfaef6-2ff2-40a4-acd6-df403883e1ea 1042 0 2025-04-30 03:29:34 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.17.153 nginx-deployment-85f456d6dd-f6xt7 eth0 default [] [] [kns.default ksa.default.default] calif3b2e0f6a5c [] []}} ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Namespace="default" Pod="nginx-deployment-85f456d6dd-f6xt7" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:47.867 [INFO][3508] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Namespace="default" Pod="nginx-deployment-85f456d6dd-f6xt7" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:47.912 [INFO][3525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" HandleID="k8s-pod-network.8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:47.926 [INFO][3525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" HandleID="k8s-pod-network.8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002873f0), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.153", "pod":"nginx-deployment-85f456d6dd-f6xt7", "timestamp":"2025-04-30 03:29:47.912502417 +0000 UTC"}, Hostname:"172.31.17.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:47.926 [INFO][3525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:47.972 [INFO][3525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:47.972 [INFO][3525] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.153'
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:47.976 [INFO][3525] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" host="172.31.17.153"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:47.985 [INFO][3525] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.153"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:47.997 [INFO][3525] ipam/ipam.go 489: Trying affinity for 192.168.66.128/26 host="172.31.17.153"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:48.008 [INFO][3525] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.128/26 host="172.31.17.153"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:48.013 [INFO][3525] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="172.31.17.153"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:48.013 [INFO][3525] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" host="172.31.17.153"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:48.016 [INFO][3525] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:48.026 [INFO][3525] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" host="172.31.17.153"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:48.034 [INFO][3525] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.130/26] block=192.168.66.128/26 handle="k8s-pod-network.8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" host="172.31.17.153"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:48.035 [INFO][3525] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.130/26] handle="k8s-pod-network.8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" host="172.31.17.153"
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:48.035 [INFO][3525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 03:29:48.073936 containerd[2102]: 2025-04-30 03:29:48.035 [INFO][3525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.130/26] IPv6=[] ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" HandleID="k8s-pod-network.8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0"
Apr 30 03:29:48.078914 containerd[2102]: 2025-04-30 03:29:48.039 [INFO][3508] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Namespace="default" Pod="nginx-deployment-85f456d6dd-f6xt7" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"abcfaef6-2ff2-40a4-acd6-df403883e1ea", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-f6xt7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif3b2e0f6a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:29:48.078914 containerd[2102]: 2025-04-30 03:29:48.039 [INFO][3508] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.130/32] ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Namespace="default" Pod="nginx-deployment-85f456d6dd-f6xt7" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0"
Apr 30 03:29:48.078914 containerd[2102]: 2025-04-30 03:29:48.039 [INFO][3508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3b2e0f6a5c ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Namespace="default" Pod="nginx-deployment-85f456d6dd-f6xt7" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0"
Apr 30 03:29:48.078914 containerd[2102]: 2025-04-30 03:29:48.047 [INFO][3508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Namespace="default" Pod="nginx-deployment-85f456d6dd-f6xt7" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0"
Apr 30 03:29:48.078914 containerd[2102]: 2025-04-30 03:29:48.048 [INFO][3508] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Namespace="default" Pod="nginx-deployment-85f456d6dd-f6xt7" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"abcfaef6-2ff2-40a4-acd6-df403883e1ea", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad", Pod:"nginx-deployment-85f456d6dd-f6xt7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif3b2e0f6a5c", MAC:"12:29:ad:c6:0c:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 03:29:48.078914 containerd[2102]: 2025-04-30 03:29:48.065 [INFO][3508] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad" Namespace="default" Pod="nginx-deployment-85f456d6dd-f6xt7" WorkloadEndpoint="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0"
Apr 30 03:29:48.109505 containerd[2102]: time="2025-04-30T03:29:48.109060456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frgxn,Uid:69369dcc-5bb6-4835-83b2-b49f1ef80401,Namespace:calico-system,Attempt:1,} returns sandbox id \"30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20\""
Apr 30 03:29:48.111882 containerd[2102]: time="2025-04-30T03:29:48.111846038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
Apr 30 03:29:48.123242 containerd[2102]: time="2025-04-30T03:29:48.123143141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:48.123445 containerd[2102]: time="2025-04-30T03:29:48.123208790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:48.123445 containerd[2102]: time="2025-04-30T03:29:48.123239918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:48.123445 containerd[2102]: time="2025-04-30T03:29:48.123419828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:48.186007 containerd[2102]: time="2025-04-30T03:29:48.185968259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-f6xt7,Uid:abcfaef6-2ff2-40a4-acd6-df403883e1ea,Namespace:default,Attempt:1,} returns sandbox id \"8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad\""
Apr 30 03:29:48.364885 kubelet[2583]: E0430 03:29:48.364754 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:49.065608 systemd-networkd[1654]: cali2c2a90f8240: Gained IPv6LL
Apr 30 03:29:49.192037 systemd-networkd[1654]: calif3b2e0f6a5c: Gained IPv6LL
Apr 30 03:29:49.206975 kubelet[2583]: I0430 03:29:49.205978 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 03:29:49.365472 kubelet[2583]: E0430 03:29:49.365319 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:49.514665 containerd[2102]: time="2025-04-30T03:29:49.514570183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:49.516589 containerd[2102]: time="2025-04-30T03:29:49.516528962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898"
Apr 30 03:29:49.517984 containerd[2102]: time="2025-04-30T03:29:49.517927341Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:49.521677 containerd[2102]: time="2025-04-30T03:29:49.521581874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:49.522980 containerd[2102]: time="2025-04-30T03:29:49.522523860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.410506767s"
Apr 30 03:29:49.522980 containerd[2102]: time="2025-04-30T03:29:49.522568827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\""
Apr 30 03:29:49.524102 containerd[2102]: time="2025-04-30T03:29:49.524076346Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Apr 30 03:29:49.525711 containerd[2102]: time="2025-04-30T03:29:49.525591771Z" level=info msg="CreateContainer within sandbox \"30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 30 03:29:49.550175 containerd[2102]: time="2025-04-30T03:29:49.550036822Z" level=info msg="CreateContainer within sandbox \"30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"54e724b143492c195a4210ca9a28647d90b3a6817b1194f4d5582cae39714606\""
Apr 30 03:29:49.552680 containerd[2102]: time="2025-04-30T03:29:49.551270752Z" level=info msg="StartContainer for \"54e724b143492c195a4210ca9a28647d90b3a6817b1194f4d5582cae39714606\""
Apr 30 03:29:49.622420 containerd[2102]: time="2025-04-30T03:29:49.622154004Z" level=info msg="StartContainer for \"54e724b143492c195a4210ca9a28647d90b3a6817b1194f4d5582cae39714606\" returns successfully"
Apr 30 03:29:50.367187 kubelet[2583]: E0430 03:29:50.367126 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:51.368302 kubelet[2583]: E0430 03:29:51.368211 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:29:51.371500 ntpd[2056]: Listen normally on 8 cali2c2a90f8240 [fe80::ecee:eeff:feee:eeee%6]:123
Apr 30 03:29:51.372597 ntpd[2056]: 30 Apr 03:29:51 ntpd[2056]: Listen normally on 8 cali2c2a90f8240 [fe80::ecee:eeff:feee:eeee%6]:123
Apr 30 03:29:51.372597 ntpd[2056]: 30 Apr 03:29:51 ntpd[2056]: Listen normally on 9 calif3b2e0f6a5c [fe80::ecee:eeff:feee:eeee%7]:123
Apr 30 03:29:51.371578 ntpd[2056]: Listen normally on 9 calif3b2e0f6a5c [fe80::ecee:eeff:feee:eeee%7]:123
Apr 30 03:29:51.513961 kubelet[2583]: I0430 03:29:51.513917 2583 topology_manager.go:215] "Topology Admit Handler" podUID="80ea38d8-a344-4083-8ef9-a258b3cb8a79" podNamespace="calico-system" podName="calico-typha-8479c68f45-5lhwj"
Apr 30 03:29:51.573604 kubelet[2583]: I0430 03:29:51.572639 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80ea38d8-a344-4083-8ef9-a258b3cb8a79-tigera-ca-bundle\") pod \"calico-typha-8479c68f45-5lhwj\" (UID: \"80ea38d8-a344-4083-8ef9-a258b3cb8a79\") " pod="calico-system/calico-typha-8479c68f45-5lhwj"
Apr 30 03:29:51.573604 kubelet[2583]: I0430 03:29:51.572713 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/80ea38d8-a344-4083-8ef9-a258b3cb8a79-typha-certs\") pod \"calico-typha-8479c68f45-5lhwj\" (UID: \"80ea38d8-a344-4083-8ef9-a258b3cb8a79\") " pod="calico-system/calico-typha-8479c68f45-5lhwj"
Apr 30 03:29:51.573604 kubelet[2583]: I0430 03:29:51.572746 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trr95\" (UniqueName: \"kubernetes.io/projected/80ea38d8-a344-4083-8ef9-a258b3cb8a79-kube-api-access-trr95\") pod \"calico-typha-8479c68f45-5lhwj\" (UID: \"80ea38d8-a344-4083-8ef9-a258b3cb8a79\") " pod="calico-system/calico-typha-8479c68f45-5lhwj"
Apr 30 03:29:51.701276 containerd[2102]: time="2025-04-30T03:29:51.700841602Z" level=info msg="StopContainer for \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\" with timeout 5 (s)"
Apr 30 03:29:51.702277 containerd[2102]: time="2025-04-30T03:29:51.701762286Z" level=info msg="Stop container \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\" with signal terminated"
Apr 30 03:29:51.774273 containerd[2102]: time="2025-04-30T03:29:51.774191450Z" level=info msg="shim disconnected" id=9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976 namespace=k8s.io
Apr 30 03:29:51.774273 containerd[2102]: time="2025-04-30T03:29:51.774273590Z" level=warning msg="cleaning up after shim disconnected" id=9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976 namespace=k8s.io
Apr 30 03:29:51.774468 containerd[2102]: time="2025-04-30T03:29:51.774285457Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:29:51.777544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976-rootfs.mount: Deactivated successfully.
Apr 30 03:29:51.821359 containerd[2102]: time="2025-04-30T03:29:51.820978194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8479c68f45-5lhwj,Uid:80ea38d8-a344-4083-8ef9-a258b3cb8a79,Namespace:calico-system,Attempt:0,}"
Apr 30 03:29:51.876899 containerd[2102]: time="2025-04-30T03:29:51.876419864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:51.876899 containerd[2102]: time="2025-04-30T03:29:51.876467647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:51.876899 containerd[2102]: time="2025-04-30T03:29:51.876491886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:51.876899 containerd[2102]: time="2025-04-30T03:29:51.876734755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:51.902447 containerd[2102]: time="2025-04-30T03:29:51.902380352Z" level=info msg="StopContainer for \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\" returns successfully"
Apr 30 03:29:51.904897 containerd[2102]: time="2025-04-30T03:29:51.904862457Z" level=info msg="StopPodSandbox for \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\""
Apr 30 03:29:51.905012 containerd[2102]: time="2025-04-30T03:29:51.904906807Z" level=info msg="Container to stop \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:29:51.905012 containerd[2102]: time="2025-04-30T03:29:51.904918863Z" level=info msg="Container to stop \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:29:51.905012 containerd[2102]: time="2025-04-30T03:29:51.904930041Z" level=info msg="Container to stop \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:29:51.913619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895-shm.mount: Deactivated successfully.
Apr 30 03:29:51.975250 containerd[2102]: time="2025-04-30T03:29:51.975136560Z" level=info msg="shim disconnected" id=782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895 namespace=k8s.io
Apr 30 03:29:51.976225 containerd[2102]: time="2025-04-30T03:29:51.975491446Z" level=warning msg="cleaning up after shim disconnected" id=782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895 namespace=k8s.io
Apr 30 03:29:51.976362 containerd[2102]: time="2025-04-30T03:29:51.976345921Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:29:51.977989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895-rootfs.mount: Deactivated successfully.
Apr 30 03:29:51.987023 containerd[2102]: time="2025-04-30T03:29:51.986981013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8479c68f45-5lhwj,Uid:80ea38d8-a344-4083-8ef9-a258b3cb8a79,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d64a7ab4c8d4cf7b830a396370637d99df8f6d21fb901e7649329823da22b29\""
Apr 30 03:29:51.996554 containerd[2102]: time="2025-04-30T03:29:51.996480453Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:29:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 03:29:52.020752 containerd[2102]: time="2025-04-30T03:29:52.020591278Z" level=info msg="TearDown network for sandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" successfully"
Apr 30 03:29:52.020752 containerd[2102]: time="2025-04-30T03:29:52.020620728Z" level=info msg="StopPodSandbox for \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" returns successfully"
Apr 30 03:29:52.077256 kubelet[2583]: I0430 03:29:52.077181 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-var-run-calico\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077421 kubelet[2583]: I0430 03:29:52.077401 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-tigera-ca-bundle\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077460 kubelet[2583]: I0430 03:29:52.077433 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km7gc\" (UniqueName: \"kubernetes.io/projected/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-kube-api-access-km7gc\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077460 kubelet[2583]: I0430 03:29:52.077450 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-net-dir\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077544 kubelet[2583]: I0430 03:29:52.077464 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-policysync\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077544 kubelet[2583]: I0430 03:29:52.077480 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-var-lib-calico\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077544 kubelet[2583]: I0430 03:29:52.077494 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-flexvol-driver-host\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077544 kubelet[2583]: I0430 03:29:52.077508 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-lib-modules\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077544 kubelet[2583]: I0430 03:29:52.077522 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-log-dir\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077544 kubelet[2583]: I0430 03:29:52.077539 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-node-certs\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077752 kubelet[2583]: I0430 03:29:52.077553 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-bin-dir\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077752 kubelet[2583]: I0430 03:29:52.077567 2583 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-xtables-lock\") pod \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\" (UID: \"949bfb9b-cd44-4ffb-981c-65f67bb0ba84\") "
Apr 30 03:29:52.077752 kubelet[2583]: I0430 03:29:52.077615 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:29:52.079745 kubelet[2583]: I0430 03:29:52.079710 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:29:52.080482 kubelet[2583]: I0430 03:29:52.080450 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:29:52.080696 kubelet[2583]: I0430 03:29:52.080681 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:29:52.080863 kubelet[2583]: I0430 03:29:52.080848 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-policysync" (OuterVolumeSpecName: "policysync") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:29:52.081033 kubelet[2583]: I0430 03:29:52.081020 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:29:52.081234 kubelet[2583]: I0430 03:29:52.081102 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:29:52.081234 kubelet[2583]: I0430 03:29:52.081122 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:29:52.081804 kubelet[2583]: I0430 03:29:52.081695 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:29:52.083116 kubelet[2583]: I0430 03:29:52.083090 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-kube-api-access-km7gc" (OuterVolumeSpecName: "kube-api-access-km7gc") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "kube-api-access-km7gc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 03:29:52.083555 kubelet[2583]: I0430 03:29:52.083493 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 03:29:52.085103 kubelet[2583]: I0430 03:29:52.085056 2583 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-node-certs" (OuterVolumeSpecName: "node-certs") pod "949bfb9b-cd44-4ffb-981c-65f67bb0ba84" (UID: "949bfb9b-cd44-4ffb-981c-65f67bb0ba84"). InnerVolumeSpecName "node-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:29:52.095577 kubelet[2583]: I0430 03:29:52.095530 2583 topology_manager.go:215] "Topology Admit Handler" podUID="7c3355e6-d6da-4ed9-abd7-0ede97487f11" podNamespace="calico-system" podName="calico-node-xrmg2" Apr 30 03:29:52.095577 kubelet[2583]: E0430 03:29:52.095586 2583 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="949bfb9b-cd44-4ffb-981c-65f67bb0ba84" containerName="calico-node" Apr 30 03:29:52.095741 kubelet[2583]: E0430 03:29:52.095596 2583 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="949bfb9b-cd44-4ffb-981c-65f67bb0ba84" containerName="flexvol-driver" Apr 30 03:29:52.095741 kubelet[2583]: E0430 03:29:52.095603 2583 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="949bfb9b-cd44-4ffb-981c-65f67bb0ba84" containerName="install-cni" Apr 30 03:29:52.095741 kubelet[2583]: I0430 03:29:52.095625 2583 memory_manager.go:354] "RemoveStaleState removing state" podUID="949bfb9b-cd44-4ffb-981c-65f67bb0ba84" containerName="calico-node" Apr 30 03:29:52.178484 kubelet[2583]: I0430 03:29:52.178444 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7c3355e6-d6da-4ed9-abd7-0ede97487f11-node-certs\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178484 kubelet[2583]: I0430 03:29:52.178493 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7c3355e6-d6da-4ed9-abd7-0ede97487f11-var-run-calico\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178631 kubelet[2583]: I0430 03:29:52.178510 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7c3355e6-d6da-4ed9-abd7-0ede97487f11-cni-log-dir\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178631 kubelet[2583]: I0430 03:29:52.178527 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c3355e6-d6da-4ed9-abd7-0ede97487f11-xtables-lock\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178631 kubelet[2583]: I0430 03:29:52.178541 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7c3355e6-d6da-4ed9-abd7-0ede97487f11-policysync\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178631 kubelet[2583]: I0430 03:29:52.178555 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c3355e6-d6da-4ed9-abd7-0ede97487f11-tigera-ca-bundle\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178631 kubelet[2583]: I0430 03:29:52.178573 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7c3355e6-d6da-4ed9-abd7-0ede97487f11-cni-net-dir\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178785 kubelet[2583]: I0430 03:29:52.178590 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7c3355e6-d6da-4ed9-abd7-0ede97487f11-lib-modules\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178785 kubelet[2583]: I0430 03:29:52.178609 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c3355e6-d6da-4ed9-abd7-0ede97487f11-var-lib-calico\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178785 kubelet[2583]: I0430 03:29:52.178625 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7c3355e6-d6da-4ed9-abd7-0ede97487f11-cni-bin-dir\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178785 kubelet[2583]: I0430 03:29:52.178640 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7c3355e6-d6da-4ed9-abd7-0ede97487f11-flexvol-driver-host\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178785 kubelet[2583]: I0430 03:29:52.178671 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc4qs\" (UniqueName: \"kubernetes.io/projected/7c3355e6-d6da-4ed9-abd7-0ede97487f11-kube-api-access-mc4qs\") pod \"calico-node-xrmg2\" (UID: \"7c3355e6-d6da-4ed9-abd7-0ede97487f11\") " pod="calico-system/calico-node-xrmg2" Apr 30 03:29:52.178785 kubelet[2583]: I0430 03:29:52.178692 2583 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-policysync\") on node 
\"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.178942 kubelet[2583]: I0430 03:29:52.178702 2583 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-var-lib-calico\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.178942 kubelet[2583]: I0430 03:29:52.178710 2583 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-flexvol-driver-host\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.178942 kubelet[2583]: I0430 03:29:52.178720 2583 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-net-dir\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.178942 kubelet[2583]: I0430 03:29:52.178728 2583 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-log-dir\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.178942 kubelet[2583]: I0430 03:29:52.178735 2583 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-node-certs\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.178942 kubelet[2583]: I0430 03:29:52.178743 2583 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-lib-modules\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.178942 kubelet[2583]: I0430 03:29:52.178750 2583 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-cni-bin-dir\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.178942 kubelet[2583]: I0430 03:29:52.178757 2583 
reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-xtables-lock\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.179135 kubelet[2583]: I0430 03:29:52.178764 2583 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-var-run-calico\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.179135 kubelet[2583]: I0430 03:29:52.178771 2583 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-tigera-ca-bundle\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.179135 kubelet[2583]: I0430 03:29:52.178779 2583 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-km7gc\" (UniqueName: \"kubernetes.io/projected/949bfb9b-cd44-4ffb-981c-65f67bb0ba84-kube-api-access-km7gc\") on node \"172.31.17.153\" DevicePath \"\"" Apr 30 03:29:52.369259 kubelet[2583]: E0430 03:29:52.369126 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:52.403396 containerd[2102]: time="2025-04-30T03:29:52.403343457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xrmg2,Uid:7c3355e6-d6da-4ed9-abd7-0ede97487f11,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:52.445470 containerd[2102]: time="2025-04-30T03:29:52.445007826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:52.445470 containerd[2102]: time="2025-04-30T03:29:52.445081118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:52.445470 containerd[2102]: time="2025-04-30T03:29:52.445099586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:52.445470 containerd[2102]: time="2025-04-30T03:29:52.445206067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:52.497125 containerd[2102]: time="2025-04-30T03:29:52.497083521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xrmg2,Uid:7c3355e6-d6da-4ed9-abd7-0ede97487f11,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5fac2e6e3822b2a0992f47e2b7de8fb91435c1cfad739218079c16f3ef1212d\"" Apr 30 03:29:52.501895 containerd[2102]: time="2025-04-30T03:29:52.501761933Z" level=info msg="CreateContainer within sandbox \"b5fac2e6e3822b2a0992f47e2b7de8fb91435c1cfad739218079c16f3ef1212d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:29:52.536636 containerd[2102]: time="2025-04-30T03:29:52.536578341Z" level=info msg="CreateContainer within sandbox \"b5fac2e6e3822b2a0992f47e2b7de8fb91435c1cfad739218079c16f3ef1212d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8564cb92d56d9b2e076b99e452b89b14a6b61fe35f87172ce0d9ec05e818deff\"" Apr 30 03:29:52.538214 containerd[2102]: time="2025-04-30T03:29:52.538118386Z" level=info msg="StartContainer for \"8564cb92d56d9b2e076b99e452b89b14a6b61fe35f87172ce0d9ec05e818deff\"" Apr 30 03:29:52.611191 kubelet[2583]: I0430 03:29:52.611006 2583 scope.go:117] "RemoveContainer" containerID="9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976" Apr 30 03:29:52.615172 containerd[2102]: time="2025-04-30T03:29:52.614977960Z" level=info msg="RemoveContainer for \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\"" Apr 30 03:29:52.628104 containerd[2102]: 
time="2025-04-30T03:29:52.627726533Z" level=info msg="RemoveContainer for \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\" returns successfully" Apr 30 03:29:52.628932 kubelet[2583]: I0430 03:29:52.628775 2583 scope.go:117] "RemoveContainer" containerID="4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438" Apr 30 03:29:52.633138 containerd[2102]: time="2025-04-30T03:29:52.633085971Z" level=info msg="RemoveContainer for \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\"" Apr 30 03:29:52.642051 containerd[2102]: time="2025-04-30T03:29:52.642004681Z" level=info msg="RemoveContainer for \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\" returns successfully" Apr 30 03:29:52.642189 containerd[2102]: time="2025-04-30T03:29:52.642106847Z" level=info msg="StartContainer for \"8564cb92d56d9b2e076b99e452b89b14a6b61fe35f87172ce0d9ec05e818deff\" returns successfully" Apr 30 03:29:52.642465 kubelet[2583]: I0430 03:29:52.642440 2583 scope.go:117] "RemoveContainer" containerID="17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f" Apr 30 03:29:52.648047 containerd[2102]: time="2025-04-30T03:29:52.648002213Z" level=info msg="RemoveContainer for \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\"" Apr 30 03:29:52.652527 containerd[2102]: time="2025-04-30T03:29:52.652484577Z" level=info msg="RemoveContainer for \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\" returns successfully" Apr 30 03:29:52.652794 kubelet[2583]: I0430 03:29:52.652773 2583 scope.go:117] "RemoveContainer" containerID="9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976" Apr 30 03:29:52.653592 containerd[2102]: time="2025-04-30T03:29:52.653528355Z" level=error msg="ContainerStatus for \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\": not found" Apr 30 03:29:52.656626 kubelet[2583]: E0430 03:29:52.656583 2583 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\": not found" containerID="9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976" Apr 30 03:29:52.656751 kubelet[2583]: I0430 03:29:52.656628 2583 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976"} err="failed to get container status \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ab808922535207cdbea6c42b90c26bb22c3c26e0ad0983c955c510e58b78976\": not found" Apr 30 03:29:52.656751 kubelet[2583]: I0430 03:29:52.656676 2583 scope.go:117] "RemoveContainer" containerID="4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438" Apr 30 03:29:52.658025 containerd[2102]: time="2025-04-30T03:29:52.657974006Z" level=error msg="ContainerStatus for \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\": not found" Apr 30 03:29:52.658324 kubelet[2583]: E0430 03:29:52.658209 2583 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\": not found" containerID="4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438" Apr 30 03:29:52.658324 kubelet[2583]: I0430 03:29:52.658252 2583 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438"} err="failed to get container status \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ec7f9dc8301b6204d43eba997076e76f38930a22b325ba132d0b6bb97e8c438\": not found" Apr 30 03:29:52.658324 kubelet[2583]: I0430 03:29:52.658283 2583 scope.go:117] "RemoveContainer" containerID="17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f" Apr 30 03:29:52.660678 containerd[2102]: time="2025-04-30T03:29:52.659925241Z" level=error msg="ContainerStatus for \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\": not found" Apr 30 03:29:52.660778 kubelet[2583]: E0430 03:29:52.660193 2583 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\": not found" containerID="17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f" Apr 30 03:29:52.660778 kubelet[2583]: I0430 03:29:52.660226 2583 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f"} err="failed to get container status \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"17469faf0825f2690aaf6ad723e162705971f5ca25690da86e92792b65e6de3f\": not found" Apr 30 03:29:52.862623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043325525.mount: Deactivated successfully. 
Apr 30 03:29:52.863280 systemd[1]: var-lib-kubelet-pods-949bfb9b\x2dcd44\x2d4ffb\x2d981c\x2d65f67bb0ba84-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Apr 30 03:29:52.863443 systemd[1]: var-lib-kubelet-pods-949bfb9b\x2dcd44\x2d4ffb\x2d981c\x2d65f67bb0ba84-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkm7gc.mount: Deactivated successfully. Apr 30 03:29:52.863587 systemd[1]: var-lib-kubelet-pods-949bfb9b\x2dcd44\x2d4ffb\x2d981c\x2d65f67bb0ba84-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Apr 30 03:29:52.911872 containerd[2102]: time="2025-04-30T03:29:52.911676830Z" level=info msg="shim disconnected" id=8564cb92d56d9b2e076b99e452b89b14a6b61fe35f87172ce0d9ec05e818deff namespace=k8s.io Apr 30 03:29:52.911872 containerd[2102]: time="2025-04-30T03:29:52.911730343Z" level=warning msg="cleaning up after shim disconnected" id=8564cb92d56d9b2e076b99e452b89b14a6b61fe35f87172ce0d9ec05e818deff namespace=k8s.io Apr 30 03:29:52.911872 containerd[2102]: time="2025-04-30T03:29:52.911738848Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:52.933917 containerd[2102]: time="2025-04-30T03:29:52.933872311Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:29:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:29:53.369710 kubelet[2583]: E0430 03:29:53.369512 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:53.475162 kubelet[2583]: I0430 03:29:53.474977 2583 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949bfb9b-cd44-4ffb-981c-65f67bb0ba84" path="/var/lib/kubelet/pods/949bfb9b-cd44-4ffb-981c-65f67bb0ba84/volumes" Apr 30 03:29:53.637106 containerd[2102]: time="2025-04-30T03:29:53.636223146Z" level=info msg="CreateContainer within 
sandbox \"b5fac2e6e3822b2a0992f47e2b7de8fb91435c1cfad739218079c16f3ef1212d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:29:53.662764 containerd[2102]: time="2025-04-30T03:29:53.662721497Z" level=info msg="CreateContainer within sandbox \"b5fac2e6e3822b2a0992f47e2b7de8fb91435c1cfad739218079c16f3ef1212d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fc0f84d762c6e70a4e1f79190e214367d39bbc9de437966fc9e8c1923455d6ab\"" Apr 30 03:29:53.663664 containerd[2102]: time="2025-04-30T03:29:53.663603386Z" level=info msg="StartContainer for \"fc0f84d762c6e70a4e1f79190e214367d39bbc9de437966fc9e8c1923455d6ab\"" Apr 30 03:29:53.708461 systemd[1]: run-containerd-runc-k8s.io-fc0f84d762c6e70a4e1f79190e214367d39bbc9de437966fc9e8c1923455d6ab-runc.ownLfK.mount: Deactivated successfully. Apr 30 03:29:53.747069 containerd[2102]: time="2025-04-30T03:29:53.747019705Z" level=info msg="StartContainer for \"fc0f84d762c6e70a4e1f79190e214367d39bbc9de437966fc9e8c1923455d6ab\" returns successfully" Apr 30 03:29:53.788364 containerd[2102]: time="2025-04-30T03:29:53.788307118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:53.796994 containerd[2102]: time="2025-04-30T03:29:53.796914689Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73306276" Apr 30 03:29:53.809482 containerd[2102]: time="2025-04-30T03:29:53.807853633Z" level=info msg="ImageCreate event name:\"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:53.817929 containerd[2102]: time="2025-04-30T03:29:53.817847755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:53.819859 
containerd[2102]: time="2025-04-30T03:29:53.819513633Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\", size \"73306154\" in 4.295398427s" Apr 30 03:29:53.819859 containerd[2102]: time="2025-04-30T03:29:53.819580702Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\"" Apr 30 03:29:53.822725 containerd[2102]: time="2025-04-30T03:29:53.822523465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:29:53.834527 containerd[2102]: time="2025-04-30T03:29:53.834191086Z" level=info msg="CreateContainer within sandbox \"8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Apr 30 03:29:53.871969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659251722.mount: Deactivated successfully. 
Apr 30 03:29:53.895527 containerd[2102]: time="2025-04-30T03:29:53.895290962Z" level=info msg="CreateContainer within sandbox \"8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c4db8b1eb862f0c363c90a2151c5a1c0105b264aa2b8d361b0194c4feba19df2\"" Apr 30 03:29:53.896681 containerd[2102]: time="2025-04-30T03:29:53.896626462Z" level=info msg="StartContainer for \"c4db8b1eb862f0c363c90a2151c5a1c0105b264aa2b8d361b0194c4feba19df2\"" Apr 30 03:29:53.960993 containerd[2102]: time="2025-04-30T03:29:53.960946377Z" level=info msg="StartContainer for \"c4db8b1eb862f0c363c90a2151c5a1c0105b264aa2b8d361b0194c4feba19df2\" returns successfully" Apr 30 03:29:54.370725 kubelet[2583]: E0430 03:29:54.370592 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:54.757052 containerd[2102]: time="2025-04-30T03:29:54.756767256Z" level=info msg="shim disconnected" id=fc0f84d762c6e70a4e1f79190e214367d39bbc9de437966fc9e8c1923455d6ab namespace=k8s.io Apr 30 03:29:54.757052 containerd[2102]: time="2025-04-30T03:29:54.756820800Z" level=warning msg="cleaning up after shim disconnected" id=fc0f84d762c6e70a4e1f79190e214367d39bbc9de437966fc9e8c1923455d6ab namespace=k8s.io Apr 30 03:29:54.757052 containerd[2102]: time="2025-04-30T03:29:54.756829214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:54.867789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc0f84d762c6e70a4e1f79190e214367d39bbc9de437966fc9e8c1923455d6ab-rootfs.mount: Deactivated successfully. 
Apr 30 03:29:55.367044 containerd[2102]: time="2025-04-30T03:29:55.366992989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:55.368459 containerd[2102]: time="2025-04-30T03:29:55.368183880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:29:55.369958 containerd[2102]: time="2025-04-30T03:29:55.369906067Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:55.371676 kubelet[2583]: E0430 03:29:55.371601 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:55.372257 containerd[2102]: time="2025-04-30T03:29:55.372206326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:55.372953 containerd[2102]: time="2025-04-30T03:29:55.372785533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.549885561s" Apr 30 03:29:55.372953 containerd[2102]: time="2025-04-30T03:29:55.372813602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:29:55.379787 containerd[2102]: 
time="2025-04-30T03:29:55.379758395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:29:55.380750 containerd[2102]: time="2025-04-30T03:29:55.380711638Z" level=info msg="CreateContainer within sandbox \"30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:29:55.398729 containerd[2102]: time="2025-04-30T03:29:55.398664948Z" level=info msg="CreateContainer within sandbox \"30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9aba0afeb223f2339228fdd5033461c6def46e83561a11b2b79352438ef8046c\"" Apr 30 03:29:55.399672 containerd[2102]: time="2025-04-30T03:29:55.399264218Z" level=info msg="StartContainer for \"9aba0afeb223f2339228fdd5033461c6def46e83561a11b2b79352438ef8046c\"" Apr 30 03:29:55.465538 containerd[2102]: time="2025-04-30T03:29:55.465484046Z" level=info msg="StartContainer for \"9aba0afeb223f2339228fdd5033461c6def46e83561a11b2b79352438ef8046c\" returns successfully" Apr 30 03:29:55.651873 containerd[2102]: time="2025-04-30T03:29:55.651712671Z" level=info msg="CreateContainer within sandbox \"b5fac2e6e3822b2a0992f47e2b7de8fb91435c1cfad739218079c16f3ef1212d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:29:55.667664 kubelet[2583]: I0430 03:29:55.665806 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-f6xt7" podStartSLOduration=16.03170859 podStartE2EDuration="21.665786219s" podCreationTimestamp="2025-04-30 03:29:34 +0000 UTC" firstStartedPulling="2025-04-30 03:29:48.187446045 +0000 UTC m=+27.135974894" lastFinishedPulling="2025-04-30 03:29:53.821523673 +0000 UTC m=+32.770052523" observedRunningTime="2025-04-30 03:29:54.6776886 +0000 UTC m=+33.626217472" watchObservedRunningTime="2025-04-30 03:29:55.665786219 +0000 UTC m=+34.614315091" Apr 
30 03:29:55.694559 containerd[2102]: time="2025-04-30T03:29:55.694505126Z" level=info msg="CreateContainer within sandbox \"b5fac2e6e3822b2a0992f47e2b7de8fb91435c1cfad739218079c16f3ef1212d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bfd50113b618ec2f366555e02a19ce7bd6c856f9b682444f9853582df06af4d3\"" Apr 30 03:29:55.695211 containerd[2102]: time="2025-04-30T03:29:55.695186223Z" level=info msg="StartContainer for \"bfd50113b618ec2f366555e02a19ce7bd6c856f9b682444f9853582df06af4d3\"" Apr 30 03:29:55.757004 containerd[2102]: time="2025-04-30T03:29:55.756960607Z" level=info msg="StartContainer for \"bfd50113b618ec2f366555e02a19ce7bd6c856f9b682444f9853582df06af4d3\" returns successfully" Apr 30 03:29:56.147983 update_engine[2077]: I20250430 03:29:56.147791 2077 update_attempter.cc:509] Updating boot flags... Apr 30 03:29:56.216747 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (4235) Apr 30 03:29:56.352680 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (4238) Apr 30 03:29:56.372154 kubelet[2583]: E0430 03:29:56.372093 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:56.481116 kubelet[2583]: I0430 03:29:56.476564 2583 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:29:56.481116 kubelet[2583]: I0430 03:29:56.476603 2583 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:29:56.533673 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (4238) Apr 30 03:29:56.688744 kubelet[2583]: I0430 03:29:56.686815 2583 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="calico-system/calico-node-xrmg2" podStartSLOduration=4.686799926 podStartE2EDuration="4.686799926s" podCreationTimestamp="2025-04-30 03:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:56.686383768 +0000 UTC m=+35.634912648" watchObservedRunningTime="2025-04-30 03:29:56.686799926 +0000 UTC m=+35.635328827" Apr 30 03:29:56.688744 kubelet[2583]: I0430 03:29:56.686956 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-frgxn" podStartSLOduration=28.424528414 podStartE2EDuration="35.686952147s" podCreationTimestamp="2025-04-30 03:29:21 +0000 UTC" firstStartedPulling="2025-04-30 03:29:48.111184439 +0000 UTC m=+27.059713290" lastFinishedPulling="2025-04-30 03:29:55.373608172 +0000 UTC m=+34.322137023" observedRunningTime="2025-04-30 03:29:55.686581437 +0000 UTC m=+34.635110305" watchObservedRunningTime="2025-04-30 03:29:56.686952147 +0000 UTC m=+35.635481018" Apr 30 03:29:57.372642 kubelet[2583]: E0430 03:29:57.372569 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:57.612313 containerd[2102]: time="2025-04-30T03:29:57.612240881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:57.613451 containerd[2102]: time="2025-04-30T03:29:57.613244552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:29:57.614674 containerd[2102]: time="2025-04-30T03:29:57.614595490Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:57.623042 containerd[2102]: time="2025-04-30T03:29:57.622729055Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:57.623410 containerd[2102]: time="2025-04-30T03:29:57.623288297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.242773698s" Apr 30 03:29:57.623410 containerd[2102]: time="2025-04-30T03:29:57.623319853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:29:57.638826 containerd[2102]: time="2025-04-30T03:29:57.638640463Z" level=info msg="CreateContainer within sandbox \"0d64a7ab4c8d4cf7b830a396370637d99df8f6d21fb901e7649329823da22b29\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:29:57.656076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923669547.mount: Deactivated successfully. 
Apr 30 03:29:57.658084 containerd[2102]: time="2025-04-30T03:29:57.658035957Z" level=info msg="CreateContainer within sandbox \"0d64a7ab4c8d4cf7b830a396370637d99df8f6d21fb901e7649329823da22b29\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"10425a95156e19bcd9da2c12234ff507c7a336c096cc6eae41894c99cc73ee0a\"" Apr 30 03:29:57.658881 containerd[2102]: time="2025-04-30T03:29:57.658849291Z" level=info msg="StartContainer for \"10425a95156e19bcd9da2c12234ff507c7a336c096cc6eae41894c99cc73ee0a\"" Apr 30 03:29:57.783604 containerd[2102]: time="2025-04-30T03:29:57.783562266Z" level=info msg="StartContainer for \"10425a95156e19bcd9da2c12234ff507c7a336c096cc6eae41894c99cc73ee0a\" returns successfully" Apr 30 03:29:58.376773 kubelet[2583]: E0430 03:29:58.376701 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:29:58.698823 kubelet[2583]: I0430 03:29:58.698537 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8479c68f45-5lhwj" podStartSLOduration=2.06236474 podStartE2EDuration="7.698519816s" podCreationTimestamp="2025-04-30 03:29:51 +0000 UTC" firstStartedPulling="2025-04-30 03:29:51.988087549 +0000 UTC m=+30.936616402" lastFinishedPulling="2025-04-30 03:29:57.624242622 +0000 UTC m=+36.572771478" observedRunningTime="2025-04-30 03:29:58.69846876 +0000 UTC m=+37.646997632" watchObservedRunningTime="2025-04-30 03:29:58.698519816 +0000 UTC m=+37.647048687" Apr 30 03:29:59.376932 kubelet[2583]: E0430 03:29:59.376878 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:00.169687 (udev-worker)[4782]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:30:00.171774 (udev-worker)[4783]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 03:30:00.377827 kubelet[2583]: E0430 03:30:00.377780 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:00.512745 kubelet[2583]: I0430 03:30:00.512701 2583 topology_manager.go:215] "Topology Admit Handler" podUID="924014ae-430a-4bab-ae9f-3614db07730f" podNamespace="default" podName="nfs-server-provisioner-0" Apr 30 03:30:00.543785 kubelet[2583]: I0430 03:30:00.543739 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/924014ae-430a-4bab-ae9f-3614db07730f-data\") pod \"nfs-server-provisioner-0\" (UID: \"924014ae-430a-4bab-ae9f-3614db07730f\") " pod="default/nfs-server-provisioner-0" Apr 30 03:30:00.543785 kubelet[2583]: I0430 03:30:00.543793 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w924\" (UniqueName: \"kubernetes.io/projected/924014ae-430a-4bab-ae9f-3614db07730f-kube-api-access-9w924\") pod \"nfs-server-provisioner-0\" (UID: \"924014ae-430a-4bab-ae9f-3614db07730f\") " pod="default/nfs-server-provisioner-0" Apr 30 03:30:00.816818 containerd[2102]: time="2025-04-30T03:30:00.816697662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:924014ae-430a-4bab-ae9f-3614db07730f,Namespace:default,Attempt:0,}" Apr 30 03:30:01.116278 systemd-networkd[1654]: cali60e51b789ff: Link UP Apr 30 03:30:01.116639 systemd-networkd[1654]: cali60e51b789ff: Gained carrier Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:00.966 [INFO][4822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.153-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 924014ae-430a-4bab-ae9f-3614db07730f 1249 0 2025-04-30 03:30:00 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 
chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.17.153 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:00.967 [INFO][4822] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.008 [INFO][4833] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" HandleID="k8s-pod-network.1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" Workload="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.023 [INFO][4833] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" HandleID="k8s-pod-network.1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" 
Workload="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030c040), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.153", "pod":"nfs-server-provisioner-0", "timestamp":"2025-04-30 03:30:01.008100791 +0000 UTC"}, Hostname:"172.31.17.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.023 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.023 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.023 [INFO][4833] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.153' Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.029 [INFO][4833] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" host="172.31.17.153" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.043 [INFO][4833] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.153" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.054 [INFO][4833] ipam/ipam.go 489: Trying affinity for 192.168.66.128/26 host="172.31.17.153" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.059 [INFO][4833] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.128/26 host="172.31.17.153" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.066 [INFO][4833] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="172.31.17.153" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.067 [INFO][4833] ipam/ipam.go 1180: Attempting to assign 1 addresses 
from block block=192.168.66.128/26 handle="k8s-pod-network.1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" host="172.31.17.153" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.073 [INFO][4833] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13 Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.083 [INFO][4833] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" host="172.31.17.153" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.102 [INFO][4833] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.131/26] block=192.168.66.128/26 handle="k8s-pod-network.1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" host="172.31.17.153" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.102 [INFO][4833] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.131/26] handle="k8s-pod-network.1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" host="172.31.17.153" Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.102 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:30:01.228959 containerd[2102]: 2025-04-30 03:30:01.102 [INFO][4833] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.131/26] IPv6=[] ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" HandleID="k8s-pod-network.1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" Workload="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:30:01.230420 containerd[2102]: 2025-04-30 03:30:01.106 [INFO][4822] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"924014ae-430a-4bab-ae9f-3614db07730f", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 30, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:01.230420 containerd[2102]: 2025-04-30 03:30:01.106 [INFO][4822] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.131/32] ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:30:01.230420 containerd[2102]: 2025-04-30 03:30:01.107 [INFO][4822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:30:01.230420 containerd[2102]: 2025-04-30 03:30:01.114 [INFO][4822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:30:01.230875 containerd[2102]: 2025-04-30 03:30:01.115 [INFO][4822] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"924014ae-430a-4bab-ae9f-3614db07730f", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 30, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f2:59:2f:b3:64:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:01.230875 containerd[2102]: 2025-04-30 03:30:01.207 [INFO][4822] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.153-k8s-nfs--server--provisioner--0-eth0" Apr 30 03:30:01.342272 kubelet[2583]: E0430 03:30:01.342142 2583 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:01.348335 containerd[2102]: time="2025-04-30T03:30:01.347996661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:01.348335 containerd[2102]: time="2025-04-30T03:30:01.348272239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:01.348335 containerd[2102]: time="2025-04-30T03:30:01.348293522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:01.349194 containerd[2102]: time="2025-04-30T03:30:01.348414534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:01.382043 kubelet[2583]: E0430 03:30:01.381901 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:01.528137 containerd[2102]: time="2025-04-30T03:30:01.528090833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:924014ae-430a-4bab-ae9f-3614db07730f,Namespace:default,Attempt:0,} returns sandbox id \"1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13\"" Apr 30 03:30:01.532016 containerd[2102]: time="2025-04-30T03:30:01.531908566Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Apr 30 03:30:02.427927 kubelet[2583]: E0430 03:30:02.424722 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:02.760140 systemd-networkd[1654]: cali60e51b789ff: Gained IPv6LL Apr 30 03:30:03.425695 kubelet[2583]: E0430 03:30:03.425423 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:04.426925 kubelet[2583]: E0430 03:30:04.426878 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:04.605530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552200348.mount: Deactivated successfully. 
Apr 30 03:30:05.371544 ntpd[2056]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%10]:123 Apr 30 03:30:05.372087 ntpd[2056]: 30 Apr 03:30:05 ntpd[2056]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%10]:123 Apr 30 03:30:05.427806 kubelet[2583]: E0430 03:30:05.427725 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:06.428473 kubelet[2583]: E0430 03:30:06.428435 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:06.730672 containerd[2102]: time="2025-04-30T03:30:06.730607510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:06.732526 containerd[2102]: time="2025-04-30T03:30:06.732465487Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Apr 30 03:30:06.733704 containerd[2102]: time="2025-04-30T03:30:06.733633612Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:06.736497 containerd[2102]: time="2025-04-30T03:30:06.736467766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:06.737615 containerd[2102]: time="2025-04-30T03:30:06.737306341Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.205332518s" Apr 30 03:30:06.737615 containerd[2102]: time="2025-04-30T03:30:06.737339669Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Apr 30 03:30:06.739846 containerd[2102]: time="2025-04-30T03:30:06.739777231Z" level=info msg="CreateContainer within sandbox \"1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Apr 30 03:30:06.757224 containerd[2102]: time="2025-04-30T03:30:06.757160143Z" level=info msg="CreateContainer within sandbox \"1453e82cc539840c2a36ceaa00a90ad45ccf519797030330246bd7de02b54f13\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f410bf9a9bf975818a5e4062cfca380781686207e96c63dd1db77c2d4a062145\"" Apr 30 03:30:06.759181 containerd[2102]: time="2025-04-30T03:30:06.759144685Z" level=info msg="StartContainer for \"f410bf9a9bf975818a5e4062cfca380781686207e96c63dd1db77c2d4a062145\"" Apr 30 03:30:06.825487 containerd[2102]: time="2025-04-30T03:30:06.825439342Z" level=info msg="StartContainer for \"f410bf9a9bf975818a5e4062cfca380781686207e96c63dd1db77c2d4a062145\" returns successfully" Apr 30 03:30:07.429432 kubelet[2583]: E0430 03:30:07.429312 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:07.736736 kubelet[2583]: I0430 03:30:07.736639 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.52965617 podStartE2EDuration="7.736617429s" podCreationTimestamp="2025-04-30 03:30:00 +0000 UTC" firstStartedPulling="2025-04-30 03:30:01.531150953 +0000 UTC m=+40.479679817" 
lastFinishedPulling="2025-04-30 03:30:06.738112223 +0000 UTC m=+45.686641076" observedRunningTime="2025-04-30 03:30:07.73628929 +0000 UTC m=+46.684818164" watchObservedRunningTime="2025-04-30 03:30:07.736617429 +0000 UTC m=+46.685146318" Apr 30 03:30:08.429960 kubelet[2583]: E0430 03:30:08.429898 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:09.430208 kubelet[2583]: E0430 03:30:09.430156 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:09.527099 kubelet[2583]: I0430 03:30:09.527045 2583 topology_manager.go:215] "Topology Admit Handler" podUID="a4327373-6a4b-48f5-aa1f-eeb5aea1fec0" podNamespace="calico-apiserver" podName="calico-apiserver-79d7797bfd-bg8xh" Apr 30 03:30:09.611422 kubelet[2583]: I0430 03:30:09.611233 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlm2w\" (UniqueName: \"kubernetes.io/projected/a4327373-6a4b-48f5-aa1f-eeb5aea1fec0-kube-api-access-vlm2w\") pod \"calico-apiserver-79d7797bfd-bg8xh\" (UID: \"a4327373-6a4b-48f5-aa1f-eeb5aea1fec0\") " pod="calico-apiserver/calico-apiserver-79d7797bfd-bg8xh" Apr 30 03:30:09.613054 kubelet[2583]: I0430 03:30:09.612550 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4327373-6a4b-48f5-aa1f-eeb5aea1fec0-calico-apiserver-certs\") pod \"calico-apiserver-79d7797bfd-bg8xh\" (UID: \"a4327373-6a4b-48f5-aa1f-eeb5aea1fec0\") " pod="calico-apiserver/calico-apiserver-79d7797bfd-bg8xh" Apr 30 03:30:09.831738 containerd[2102]: time="2025-04-30T03:30:09.831693711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7797bfd-bg8xh,Uid:a4327373-6a4b-48f5-aa1f-eeb5aea1fec0,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:30:10.025036 
systemd-networkd[1654]: calid83f408e5c8: Link UP Apr 30 03:30:10.027857 systemd-networkd[1654]: calid83f408e5c8: Gained carrier Apr 30 03:30:10.030505 (udev-worker)[5016]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.917 [INFO][4996] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0 calico-apiserver-79d7797bfd- calico-apiserver a4327373-6a4b-48f5-aa1f-eeb5aea1fec0 1319 0 2025-04-30 03:30:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79d7797bfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172.31.17.153 calico-apiserver-79d7797bfd-bg8xh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid83f408e5c8 [] []}} ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-bg8xh" WorkloadEndpoint="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.917 [INFO][4996] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-bg8xh" WorkloadEndpoint="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.951 [INFO][5008] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" HandleID="k8s-pod-network.7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Workload="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0" 
Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.970 [INFO][5008] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" HandleID="k8s-pod-network.7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Workload="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384aa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172.31.17.153", "pod":"calico-apiserver-79d7797bfd-bg8xh", "timestamp":"2025-04-30 03:30:09.951923389 +0000 UTC"}, Hostname:"172.31.17.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.970 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.970 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.970 [INFO][5008] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.153' Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.974 [INFO][5008] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" host="172.31.17.153" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.981 [INFO][5008] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.153" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.989 [INFO][5008] ipam/ipam.go 489: Trying affinity for 192.168.66.128/26 host="172.31.17.153" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.992 [INFO][5008] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.128/26 host="172.31.17.153" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.996 [INFO][5008] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="172.31.17.153" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.996 [INFO][5008] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" host="172.31.17.153" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:09.999 [INFO][5008] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705 Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:10.007 [INFO][5008] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" host="172.31.17.153" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:10.020 [INFO][5008] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.132/26] block=192.168.66.128/26 
handle="k8s-pod-network.7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" host="172.31.17.153" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:10.020 [INFO][5008] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.132/26] handle="k8s-pod-network.7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" host="172.31.17.153" Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:10.020 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:10.044149 containerd[2102]: 2025-04-30 03:30:10.020 [INFO][5008] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.132/26] IPv6=[] ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" HandleID="k8s-pod-network.7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Workload="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0" Apr 30 03:30:10.044773 containerd[2102]: 2025-04-30 03:30:10.023 [INFO][4996] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-bg8xh" WorkloadEndpoint="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0", GenerateName:"calico-apiserver-79d7797bfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4327373-6a4b-48f5-aa1f-eeb5aea1fec0", ResourceVersion:"1319", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7797bfd", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"", Pod:"calico-apiserver-79d7797bfd-bg8xh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid83f408e5c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:10.044773 containerd[2102]: 2025-04-30 03:30:10.023 [INFO][4996] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.66.132/32] ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-bg8xh" WorkloadEndpoint="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0" Apr 30 03:30:10.044773 containerd[2102]: 2025-04-30 03:30:10.023 [INFO][4996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid83f408e5c8 ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-bg8xh" WorkloadEndpoint="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0" Apr 30 03:30:10.044773 containerd[2102]: 2025-04-30 03:30:10.025 [INFO][4996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-bg8xh" WorkloadEndpoint="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0" Apr 30 03:30:10.044773 containerd[2102]: 2025-04-30 03:30:10.026 [INFO][4996] cni-plugin/k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-bg8xh" WorkloadEndpoint="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0", GenerateName:"calico-apiserver-79d7797bfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4327373-6a4b-48f5-aa1f-eeb5aea1fec0", ResourceVersion:"1319", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7797bfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705", Pod:"calico-apiserver-79d7797bfd-bg8xh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid83f408e5c8", MAC:"a2:83:a4:71:15:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:10.044773 containerd[2102]: 2025-04-30 03:30:10.042 [INFO][4996] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705" Namespace="calico-apiserver" Pod="calico-apiserver-79d7797bfd-bg8xh" WorkloadEndpoint="172.31.17.153-k8s-calico--apiserver--79d7797bfd--bg8xh-eth0" Apr 30 03:30:10.105387 containerd[2102]: time="2025-04-30T03:30:10.104138078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:10.105387 containerd[2102]: time="2025-04-30T03:30:10.104202476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:10.105387 containerd[2102]: time="2025-04-30T03:30:10.104230415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:10.105387 containerd[2102]: time="2025-04-30T03:30:10.104334416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:10.171632 containerd[2102]: time="2025-04-30T03:30:10.171535771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7797bfd-bg8xh,Uid:a4327373-6a4b-48f5-aa1f-eeb5aea1fec0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705\"" Apr 30 03:30:10.173603 containerd[2102]: time="2025-04-30T03:30:10.173559366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:30:10.431763 kubelet[2583]: E0430 03:30:10.430936 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:11.403227 systemd-networkd[1654]: calid83f408e5c8: Gained IPv6LL Apr 30 03:30:11.432162 kubelet[2583]: E0430 03:30:11.432112 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 
03:30:12.432297 kubelet[2583]: E0430 03:30:12.432245 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:12.913452 containerd[2102]: time="2025-04-30T03:30:12.913336402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:12.914877 containerd[2102]: time="2025-04-30T03:30:12.914762611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:30:12.916275 containerd[2102]: time="2025-04-30T03:30:12.916130851Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:12.922545 containerd[2102]: time="2025-04-30T03:30:12.922501049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:12.923320 containerd[2102]: time="2025-04-30T03:30:12.923138145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.749544133s" Apr 30 03:30:12.923320 containerd[2102]: time="2025-04-30T03:30:12.923173723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:30:12.925932 containerd[2102]: time="2025-04-30T03:30:12.925821130Z" level=info msg="CreateContainer within sandbox 
\"7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:30:12.941121 containerd[2102]: time="2025-04-30T03:30:12.941078935Z" level=info msg="CreateContainer within sandbox \"7c228e153da2252dae8dcfbd6797a6ec48f810ab275b8f6be93471fad3579705\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3653ca45dd744121e6a8efdd40a1da80ac816c0ec689bf615de81759e87fa490\"" Apr 30 03:30:12.941805 containerd[2102]: time="2025-04-30T03:30:12.941781938Z" level=info msg="StartContainer for \"3653ca45dd744121e6a8efdd40a1da80ac816c0ec689bf615de81759e87fa490\"" Apr 30 03:30:13.027737 containerd[2102]: time="2025-04-30T03:30:13.027680448Z" level=info msg="StartContainer for \"3653ca45dd744121e6a8efdd40a1da80ac816c0ec689bf615de81759e87fa490\" returns successfully" Apr 30 03:30:13.433608 kubelet[2583]: E0430 03:30:13.433386 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:13.763529 kubelet[2583]: I0430 03:30:13.763366 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79d7797bfd-bg8xh" podStartSLOduration=2.01190475 podStartE2EDuration="4.763349415s" podCreationTimestamp="2025-04-30 03:30:09 +0000 UTC" firstStartedPulling="2025-04-30 03:30:10.173110704 +0000 UTC m=+49.121639556" lastFinishedPulling="2025-04-30 03:30:12.924555368 +0000 UTC m=+51.873084221" observedRunningTime="2025-04-30 03:30:13.76236356 +0000 UTC m=+52.710892442" watchObservedRunningTime="2025-04-30 03:30:13.763349415 +0000 UTC m=+52.711878286" Apr 30 03:30:14.371546 ntpd[2056]: Listen normally on 11 calid83f408e5c8 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 30 03:30:14.372019 ntpd[2056]: 30 Apr 03:30:14 ntpd[2056]: Listen normally on 11 calid83f408e5c8 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 30 03:30:14.433870 kubelet[2583]: E0430 03:30:14.433802 2583 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:14.723513 kubelet[2583]: I0430 03:30:14.723465 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:15.434152 kubelet[2583]: E0430 03:30:15.434111 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:16.434717 kubelet[2583]: E0430 03:30:16.434672 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:17.434872 kubelet[2583]: E0430 03:30:17.434798 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:18.435040 kubelet[2583]: E0430 03:30:18.434998 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:19.435624 kubelet[2583]: E0430 03:30:19.435521 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:20.436006 kubelet[2583]: E0430 03:30:20.435953 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:21.342120 kubelet[2583]: E0430 03:30:21.342051 2583 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:21.385286 containerd[2102]: time="2025-04-30T03:30:21.385039314Z" level=info msg="StopPodSandbox for \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\"" Apr 30 03:30:21.385286 containerd[2102]: time="2025-04-30T03:30:21.385158048Z" level=info msg="TearDown network for sandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" successfully" Apr 30 03:30:21.385286 containerd[2102]: time="2025-04-30T03:30:21.385172831Z" level=info 
msg="StopPodSandbox for \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" returns successfully" Apr 30 03:30:21.436281 kubelet[2583]: E0430 03:30:21.436240 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:21.443790 containerd[2102]: time="2025-04-30T03:30:21.443195797Z" level=info msg="RemovePodSandbox for \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\"" Apr 30 03:30:21.443790 containerd[2102]: time="2025-04-30T03:30:21.443243246Z" level=info msg="Forcibly stopping sandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\"" Apr 30 03:30:21.443790 containerd[2102]: time="2025-04-30T03:30:21.443313688Z" level=info msg="TearDown network for sandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" successfully" Apr 30 03:30:21.466535 containerd[2102]: time="2025-04-30T03:30:21.466165602Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:30:21.466535 containerd[2102]: time="2025-04-30T03:30:21.466388829Z" level=info msg="RemovePodSandbox \"782f2e27b320c1d5d9068f018ea7447b2bbf0754ef4342535004609e8aeee895\" returns successfully" Apr 30 03:30:21.467168 containerd[2102]: time="2025-04-30T03:30:21.467128222Z" level=info msg="StopPodSandbox for \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\"" Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.561 [WARNING][5165] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-csi--node--driver--frgxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69369dcc-5bb6-4835-83b2-b49f1ef80401", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20", Pod:"csi-node-driver-frgxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2c2a90f8240", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.561 [INFO][5165] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.561 [INFO][5165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" iface="eth0" netns="" Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.561 [INFO][5165] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.561 [INFO][5165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.605 [INFO][5172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" HandleID="k8s-pod-network.235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.605 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.605 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.613 [WARNING][5172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" HandleID="k8s-pod-network.235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.613 [INFO][5172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" HandleID="k8s-pod-network.235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.616 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:21.620078 containerd[2102]: 2025-04-30 03:30:21.618 [INFO][5165] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:30:21.620078 containerd[2102]: time="2025-04-30T03:30:21.619984755Z" level=info msg="TearDown network for sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\" successfully" Apr 30 03:30:21.620078 containerd[2102]: time="2025-04-30T03:30:21.620015063Z" level=info msg="StopPodSandbox for \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\" returns successfully" Apr 30 03:30:21.622155 containerd[2102]: time="2025-04-30T03:30:21.620515663Z" level=info msg="RemovePodSandbox for \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\"" Apr 30 03:30:21.622155 containerd[2102]: time="2025-04-30T03:30:21.620548379Z" level=info msg="Forcibly stopping sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\"" Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.681 [WARNING][5190] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-csi--node--driver--frgxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69369dcc-5bb6-4835-83b2-b49f1ef80401", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"30643ed60325618dccf566a25794e41027fcaf47fb6815cfc8e723ac39849c20", Pod:"csi-node-driver-frgxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2c2a90f8240", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.682 [INFO][5190] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.682 [INFO][5190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" iface="eth0" netns="" Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.682 [INFO][5190] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.682 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.709 [INFO][5197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" HandleID="k8s-pod-network.235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.709 [INFO][5197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.710 [INFO][5197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.717 [WARNING][5197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" HandleID="k8s-pod-network.235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.717 [INFO][5197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" HandleID="k8s-pod-network.235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Workload="172.31.17.153-k8s-csi--node--driver--frgxn-eth0" Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.721 [INFO][5197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:21.724931 containerd[2102]: 2025-04-30 03:30:21.722 [INFO][5190] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746" Apr 30 03:30:21.725511 containerd[2102]: time="2025-04-30T03:30:21.724958029Z" level=info msg="TearDown network for sandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\" successfully" Apr 30 03:30:21.728311 containerd[2102]: time="2025-04-30T03:30:21.728261049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:21.728471 containerd[2102]: time="2025-04-30T03:30:21.728334201Z" level=info msg="RemovePodSandbox \"235228a9dbabd0b60901489ff5216089a2eb746fa2118a71b22c39163b9e0746\" returns successfully" Apr 30 03:30:21.728910 containerd[2102]: time="2025-04-30T03:30:21.728883025Z" level=info msg="StopPodSandbox for \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\"" Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.779 [WARNING][5215] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"abcfaef6-2ff2-40a4-acd6-df403883e1ea", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad", Pod:"nginx-deployment-85f456d6dd-f6xt7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif3b2e0f6a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.779 [INFO][5215] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.779 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" iface="eth0" netns="" Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.779 [INFO][5215] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.779 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.805 [INFO][5222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" HandleID="k8s-pod-network.dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.805 [INFO][5222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.805 [INFO][5222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.814 [WARNING][5222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" HandleID="k8s-pod-network.dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.814 [INFO][5222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" HandleID="k8s-pod-network.dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.816 [INFO][5222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:21.819194 containerd[2102]: 2025-04-30 03:30:21.817 [INFO][5215] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:30:21.819691 containerd[2102]: time="2025-04-30T03:30:21.819237068Z" level=info msg="TearDown network for sandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\" successfully" Apr 30 03:30:21.819691 containerd[2102]: time="2025-04-30T03:30:21.819263280Z" level=info msg="StopPodSandbox for \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\" returns successfully" Apr 30 03:30:21.819924 containerd[2102]: time="2025-04-30T03:30:21.819899767Z" level=info msg="RemovePodSandbox for \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\"" Apr 30 03:30:21.819924 containerd[2102]: time="2025-04-30T03:30:21.819935286Z" level=info msg="Forcibly stopping sandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\"" Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.877 [WARNING][5240] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"abcfaef6-2ff2-40a4-acd6-df403883e1ea", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"8e286e399d251d2289f2234ea0ad12052d202242b708d1e24eb2b2527e5150ad", Pod:"nginx-deployment-85f456d6dd-f6xt7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif3b2e0f6a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.877 [INFO][5240] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.877 [INFO][5240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" iface="eth0" netns="" Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.877 [INFO][5240] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.877 [INFO][5240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.900 [INFO][5247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" HandleID="k8s-pod-network.dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.900 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.900 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.909 [WARNING][5247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" HandleID="k8s-pod-network.dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.910 [INFO][5247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" HandleID="k8s-pod-network.dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Workload="172.31.17.153-k8s-nginx--deployment--85f456d6dd--f6xt7-eth0" Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.912 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:30:21.914907 containerd[2102]: 2025-04-30 03:30:21.913 [INFO][5240] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d" Apr 30 03:30:21.914907 containerd[2102]: time="2025-04-30T03:30:21.914893271Z" level=info msg="TearDown network for sandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\" successfully" Apr 30 03:30:21.918445 containerd[2102]: time="2025-04-30T03:30:21.918368431Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:30:21.918445 containerd[2102]: time="2025-04-30T03:30:21.918426783Z" level=info msg="RemovePodSandbox \"dcb9b47039f58b69af698e41ae05a107098ad3c7511b55bc75eda71a2617395d\" returns successfully" Apr 30 03:30:22.436363 kubelet[2583]: E0430 03:30:22.436330 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:23.438006 kubelet[2583]: E0430 03:30:23.437966 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:24.438778 kubelet[2583]: E0430 03:30:24.438715 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:25.439757 kubelet[2583]: E0430 03:30:25.439697 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:26.440403 kubelet[2583]: E0430 03:30:26.440347 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:27.441104 kubelet[2583]: E0430 03:30:27.441031 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:28.442364 kubelet[2583]: E0430 03:30:28.442189 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:29.443033 kubelet[2583]: E0430 03:30:29.442979 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:30.443623 kubelet[2583]: E0430 03:30:30.443570 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:31.444749 kubelet[2583]: E0430 03:30:31.444685 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Apr 30 03:30:32.072602 kubelet[2583]: I0430 03:30:32.071339 2583 topology_manager.go:215] "Topology Admit Handler" podUID="dad5b3e3-5f08-489d-ad3c-d3535455d967" podNamespace="default" podName="test-pod-1" Apr 30 03:30:32.178843 kubelet[2583]: I0430 03:30:32.178584 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-16b2a8d6-b03a-4bc1-ab56-967432c93ae0\" (UniqueName: \"kubernetes.io/nfs/dad5b3e3-5f08-489d-ad3c-d3535455d967-pvc-16b2a8d6-b03a-4bc1-ab56-967432c93ae0\") pod \"test-pod-1\" (UID: \"dad5b3e3-5f08-489d-ad3c-d3535455d967\") " pod="default/test-pod-1" Apr 30 03:30:32.178843 kubelet[2583]: I0430 03:30:32.178640 2583 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2p7d\" (UniqueName: \"kubernetes.io/projected/dad5b3e3-5f08-489d-ad3c-d3535455d967-kube-api-access-l2p7d\") pod \"test-pod-1\" (UID: \"dad5b3e3-5f08-489d-ad3c-d3535455d967\") " pod="default/test-pod-1" Apr 30 03:30:32.333749 kernel: FS-Cache: Loaded Apr 30 03:30:32.408761 kernel: RPC: Registered named UNIX socket transport module. Apr 30 03:30:32.408868 kernel: RPC: Registered udp transport module. Apr 30 03:30:32.408885 kernel: RPC: Registered tcp transport module. Apr 30 03:30:32.408903 kernel: RPC: Registered tcp-with-tls transport module. Apr 30 03:30:32.408917 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Apr 30 03:30:32.446117 kubelet[2583]: E0430 03:30:32.445201 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:32.725756 kernel: NFS: Registering the id_resolver key type Apr 30 03:30:32.725924 kernel: Key type id_resolver registered Apr 30 03:30:32.726778 kernel: Key type id_legacy registered Apr 30 03:30:32.764731 nfsidmap[5297]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Apr 30 03:30:32.769010 nfsidmap[5298]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Apr 30 03:30:32.975316 containerd[2102]: time="2025-04-30T03:30:32.975265006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dad5b3e3-5f08-489d-ad3c-d3535455d967,Namespace:default,Attempt:0,}" Apr 30 03:30:33.129862 (udev-worker)[5290]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 03:30:33.131013 systemd-networkd[1654]: cali5ec59c6bf6e: Link UP Apr 30 03:30:33.131298 systemd-networkd[1654]: cali5ec59c6bf6e: Gained carrier Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.031 [INFO][5299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.153-k8s-test--pod--1-eth0 default dad5b3e3-5f08-489d-ad3c-d3535455d967 1439 0 2025-04-30 03:30:03 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.17.153 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.032 [INFO][5299] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.064 [INFO][5311] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" HandleID="k8s-pod-network.cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Workload="172.31.17.153-k8s-test--pod--1-eth0" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.075 [INFO][5311] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" HandleID="k8s-pod-network.cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Workload="172.31.17.153-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000279610), Attrs:map[string]string{"namespace":"default", 
"node":"172.31.17.153", "pod":"test-pod-1", "timestamp":"2025-04-30 03:30:33.064663353 +0000 UTC"}, Hostname:"172.31.17.153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.075 [INFO][5311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.075 [INFO][5311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.075 [INFO][5311] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.153' Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.077 [INFO][5311] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" host="172.31.17.153" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.082 [INFO][5311] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.153" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.090 [INFO][5311] ipam/ipam.go 489: Trying affinity for 192.168.66.128/26 host="172.31.17.153" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.094 [INFO][5311] ipam/ipam.go 155: Attempting to load block cidr=192.168.66.128/26 host="172.31.17.153" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.099 [INFO][5311] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="172.31.17.153" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.099 [INFO][5311] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" host="172.31.17.153" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 
03:30:33.101 [INFO][5311] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140 Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.108 [INFO][5311] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" host="172.31.17.153" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.125 [INFO][5311] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.66.133/26] block=192.168.66.128/26 handle="k8s-pod-network.cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" host="172.31.17.153" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.125 [INFO][5311] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.66.133/26] handle="k8s-pod-network.cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" host="172.31.17.153" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.125 [INFO][5311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.125 [INFO][5311] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.133/26] IPv6=[] ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" HandleID="k8s-pod-network.cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Workload="172.31.17.153-k8s-test--pod--1-eth0" Apr 30 03:30:33.165411 containerd[2102]: 2025-04-30 03:30:33.127 [INFO][5299] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"dad5b3e3-5f08-489d-ad3c-d3535455d967", ResourceVersion:"1439", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 30, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.153", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:33.166697 containerd[2102]: 2025-04-30 03:30:33.127 [INFO][5299] cni-plugin/k8s.go 387: Calico CNI using IPs: 
[192.168.66.133/32] ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0" Apr 30 03:30:33.166697 containerd[2102]: 2025-04-30 03:30:33.127 [INFO][5299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0" Apr 30 03:30:33.166697 containerd[2102]: 2025-04-30 03:30:33.133 [INFO][5299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0" Apr 30 03:30:33.166697 containerd[2102]: 2025-04-30 03:30:33.133 [INFO][5299] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.153-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"dad5b3e3-5f08-489d-ad3c-d3535455d967", ResourceVersion:"1439", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 30, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"172.31.17.153", ContainerID:"cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"66:bb:42:01:e1:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:30:33.166697 containerd[2102]: 2025-04-30 03:30:33.159 [INFO][5299] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.153-k8s-test--pod--1-eth0" Apr 30 03:30:33.194486 containerd[2102]: time="2025-04-30T03:30:33.194143430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:30:33.194486 containerd[2102]: time="2025-04-30T03:30:33.194198336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:30:33.194486 containerd[2102]: time="2025-04-30T03:30:33.194213377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:33.195892 containerd[2102]: time="2025-04-30T03:30:33.194471377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:30:33.256506 containerd[2102]: time="2025-04-30T03:30:33.256468586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dad5b3e3-5f08-489d-ad3c-d3535455d967,Namespace:default,Attempt:0,} returns sandbox id \"cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140\"" Apr 30 03:30:33.258750 containerd[2102]: time="2025-04-30T03:30:33.258497203Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Apr 30 03:30:33.446430 kubelet[2583]: E0430 03:30:33.446382 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:33.528742 containerd[2102]: time="2025-04-30T03:30:33.528695665Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:33.530177 containerd[2102]: time="2025-04-30T03:30:33.529610994Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Apr 30 03:30:33.532527 containerd[2102]: time="2025-04-30T03:30:33.532491387Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\", size \"73306154\" in 273.953751ms" Apr 30 03:30:33.532527 containerd[2102]: time="2025-04-30T03:30:33.532525508Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\"" Apr 30 03:30:33.534589 containerd[2102]: time="2025-04-30T03:30:33.534524247Z" level=info msg="CreateContainer within sandbox \"cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140\" for container &ContainerMetadata{Name:test,Attempt:0,}" Apr 30 
03:30:33.547743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585684187.mount: Deactivated successfully. Apr 30 03:30:33.555767 containerd[2102]: time="2025-04-30T03:30:33.555721195Z" level=info msg="CreateContainer within sandbox \"cf7bf51d0009e8cf23f548bd6f27a97843a5a6f58c08eed38e36b25ecbcd2140\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2bd5db07c6a9af2a444eb31bf07c4b5a686fe0fc04371931f7af18476dfad3f1\"" Apr 30 03:30:33.556456 containerd[2102]: time="2025-04-30T03:30:33.556423716Z" level=info msg="StartContainer for \"2bd5db07c6a9af2a444eb31bf07c4b5a686fe0fc04371931f7af18476dfad3f1\"" Apr 30 03:30:33.622382 containerd[2102]: time="2025-04-30T03:30:33.622152967Z" level=info msg="StartContainer for \"2bd5db07c6a9af2a444eb31bf07c4b5a686fe0fc04371931f7af18476dfad3f1\" returns successfully" Apr 30 03:30:34.446903 kubelet[2583]: E0430 03:30:34.446861 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:34.568007 systemd-networkd[1654]: cali5ec59c6bf6e: Gained IPv6LL Apr 30 03:30:35.447246 kubelet[2583]: E0430 03:30:35.447197 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:36.448014 kubelet[2583]: E0430 03:30:36.447955 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:37.371552 ntpd[2056]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%12]:123 Apr 30 03:30:37.371979 ntpd[2056]: 30 Apr 03:30:37 ntpd[2056]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%12]:123 Apr 30 03:30:37.448480 kubelet[2583]: E0430 03:30:37.448437 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:38.449035 kubelet[2583]: E0430 03:30:38.448929 2583 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:39.449695 kubelet[2583]: E0430 03:30:39.449625 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:40.450305 kubelet[2583]: E0430 03:30:40.450192 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:41.342086 kubelet[2583]: E0430 03:30:41.342032 2583 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:41.451076 kubelet[2583]: E0430 03:30:41.451044 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:42.451412 kubelet[2583]: E0430 03:30:42.451359 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:43.452522 kubelet[2583]: E0430 03:30:43.452441 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:44.452906 kubelet[2583]: E0430 03:30:44.452832 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:45.453140 kubelet[2583]: I0430 03:30:45.452906 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:30:45.453687 kubelet[2583]: E0430 03:30:45.453459 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:46.453613 kubelet[2583]: E0430 03:30:46.453566 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:47.454132 kubelet[2583]: E0430 03:30:47.454080 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Apr 30 03:30:48.454516 kubelet[2583]: E0430 03:30:48.454458 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:49.455182 kubelet[2583]: E0430 03:30:49.455126 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:50.455987 kubelet[2583]: E0430 03:30:50.455838 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:51.456952 kubelet[2583]: E0430 03:30:51.456914 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:52.457855 kubelet[2583]: E0430 03:30:52.457796 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:53.042069 kubelet[2583]: E0430 03:30:53.041997 2583 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 30 03:30:53.458511 kubelet[2583]: E0430 03:30:53.458453 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:54.460050 kubelet[2583]: E0430 03:30:54.459991 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:55.460984 kubelet[2583]: E0430 03:30:55.460939 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:56.461422 kubelet[2583]: E0430 03:30:56.461370 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:57.462513 
kubelet[2583]: E0430 03:30:57.462452 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:58.463244 kubelet[2583]: E0430 03:30:58.463184 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:30:59.463574 kubelet[2583]: E0430 03:30:59.463517 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:31:00.464212 kubelet[2583]: E0430 03:31:00.464157 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:31:01.341577 kubelet[2583]: E0430 03:31:01.341500 2583 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:31:01.464876 kubelet[2583]: E0430 03:31:01.464846 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:31:02.466671 kubelet[2583]: E0430 03:31:02.466125 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:31:03.043047 kubelet[2583]: E0430 03:31:03.042988 2583 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 30 03:31:03.467465 kubelet[2583]: E0430 03:31:03.467395 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:31:04.468295 kubelet[2583]: E0430 03:31:04.468241 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 03:31:05.468627 kubelet[2583]: E0430 03:31:05.468569 2583 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:06.469211 kubelet[2583]: E0430 03:31:06.469143 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:07.470799 kubelet[2583]: E0430 03:31:07.470720 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:08.471455 kubelet[2583]: E0430 03:31:08.471364 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:09.472350 kubelet[2583]: E0430 03:31:09.472295 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:09.936355 kubelet[2583]: E0430 03:31:09.936305 2583 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": unexpected EOF"
Apr 30 03:31:09.938994 kubelet[2583]: E0430 03:31:09.938356 2583 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection reset by peer"
Apr 30 03:31:09.939906 kubelet[2583]: E0430 03:31:09.939767 2583 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused"
Apr 30 03:31:09.942996 kubelet[2583]: I0430 03:31:09.939806 2583 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Apr 30 03:31:09.953387 kubelet[2583]: E0430 03:31:09.943236 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused" interval="200ms"
Apr 30 03:31:10.154804 kubelet[2583]: E0430 03:31:10.154758 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused" interval="400ms"
Apr 30 03:31:10.472456 kubelet[2583]: E0430 03:31:10.472396 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:10.556489 kubelet[2583]: E0430 03:31:10.556441 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused" interval="800ms"
Apr 30 03:31:10.938692 kubelet[2583]: I0430 03:31:10.937867 2583 status_manager.go:853] "Failed to get status for pod" podUID="a4327373-6a4b-48f5-aa1f-eeb5aea1fec0" pod="calico-apiserver/calico-apiserver-79d7797bfd-bg8xh" err="Get \"https://172.31.23.191:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-79d7797bfd-bg8xh\": dial tcp 172.31.23.191:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Apr 30 03:31:10.939954 kubelet[2583]: I0430 03:31:10.939910 2583 status_manager.go:853] "Failed to get status for pod" podUID="a4327373-6a4b-48f5-aa1f-eeb5aea1fec0" pod="calico-apiserver/calico-apiserver-79d7797bfd-bg8xh" err="Get \"https://172.31.23.191:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-79d7797bfd-bg8xh\": dial tcp 172.31.23.191:6443: connect: connection refused"
Apr 30 03:31:11.357694 kubelet[2583]: E0430 03:31:11.357559 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused" interval="1.6s"
Apr 30 03:31:11.473125 kubelet[2583]: I0430 03:31:11.473011 2583 status_manager.go:853] "Failed to get status for pod" podUID="a4327373-6a4b-48f5-aa1f-eeb5aea1fec0" pod="calico-apiserver/calico-apiserver-79d7797bfd-bg8xh" err="Get \"https://172.31.23.191:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-79d7797bfd-bg8xh\": dial tcp 172.31.23.191:6443: connect: connection refused"
Apr 30 03:31:11.473125 kubelet[2583]: E0430 03:31:11.473081 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:12.473824 kubelet[2583]: E0430 03:31:12.473771 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:12.959375 kubelet[2583]: E0430 03:31:12.959329 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused" interval="3.2s"
Apr 30 03:31:13.474165 kubelet[2583]: E0430 03:31:13.474131 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:13.775230 kubelet[2583]: E0430 03:31:13.775100 2583 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.153\": Get \"https://172.31.23.191:6443/api/v1/nodes/172.31.17.153?resourceVersion=0&timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused"
Apr 30 03:31:13.776020 kubelet[2583]: E0430 03:31:13.775919 2583 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.153\": Get \"https://172.31.23.191:6443/api/v1/nodes/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused"
Apr 30 03:31:13.776635 kubelet[2583]: E0430 03:31:13.776603 2583 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.153\": Get \"https://172.31.23.191:6443/api/v1/nodes/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused"
Apr 30 03:31:13.777173 kubelet[2583]: E0430 03:31:13.777139 2583 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.153\": Get \"https://172.31.23.191:6443/api/v1/nodes/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused"
Apr 30 03:31:13.777635 kubelet[2583]: E0430 03:31:13.777605 2583 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.153\": Get \"https://172.31.23.191:6443/api/v1/nodes/172.31.17.153?timeout=10s\": dial tcp 172.31.23.191:6443: connect: connection refused"
Apr 30 03:31:13.777635 kubelet[2583]: E0430 03:31:13.777625 2583 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Apr 30 03:31:14.474954 kubelet[2583]: E0430 03:31:14.474748 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:15.475714 kubelet[2583]: E0430 03:31:15.475674 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:16.476257 kubelet[2583]: E0430 03:31:16.476183 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:17.476704 kubelet[2583]: E0430 03:31:17.476668 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:18.477529 kubelet[2583]: E0430 03:31:18.477457 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:19.478001 kubelet[2583]: E0430 03:31:19.477941 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:20.479675 kubelet[2583]: E0430 03:31:20.478737 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:21.341685 kubelet[2583]: E0430 03:31:21.341610 2583 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:21.479728 kubelet[2583]: E0430 03:31:21.479687 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:22.480927 kubelet[2583]: E0430 03:31:22.480822 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:23.481490 kubelet[2583]: E0430 03:31:23.481418 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:24.481808 kubelet[2583]: E0430 03:31:24.481750 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:25.481932 kubelet[2583]: E0430 03:31:25.481880 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:26.164495 kubelet[2583]: E0430 03:31:26.164424 2583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.153?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Apr 30 03:31:26.482612 kubelet[2583]: E0430 03:31:26.482559 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:27.483165 kubelet[2583]: E0430 03:31:27.483123 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:28.483874 kubelet[2583]: E0430 03:31:28.483821 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:29.484471 kubelet[2583]: E0430 03:31:29.484401 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:30.484758 kubelet[2583]: E0430 03:31:30.484716 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 03:31:31.485085 kubelet[2583]: E0430 03:31:31.485034 2583 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"