Jul 7 05:54:21.223664 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 7 05:54:21.223710 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025 Jul 7 05:54:21.223735 kernel: KASLR disabled due to lack of seed Jul 7 05:54:21.229678 kernel: efi: EFI v2.7 by EDK II Jul 7 05:54:21.229698 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18 Jul 7 05:54:21.229714 kernel: ACPI: Early table checksum verification disabled Jul 7 05:54:21.229732 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 7 05:54:21.229785 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 7 05:54:21.229803 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 7 05:54:21.229819 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 7 05:54:21.229844 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 7 05:54:21.229861 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 7 05:54:21.229876 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 7 05:54:21.229892 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 7 05:54:21.229911 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 7 05:54:21.229932 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 7 05:54:21.229949 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 7 05:54:21.229966 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 7 05:54:21.229982 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 7 05:54:21.229999 kernel: printk: bootconsole [uart0] enabled Jul 7 05:54:21.230015 kernel: NUMA: Failed to initialise from firmware Jul 7 05:54:21.230032 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 7 05:54:21.230049 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jul 7 05:54:21.230065 kernel: Zone ranges: Jul 7 05:54:21.230081 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 7 05:54:21.230097 kernel: DMA32 empty Jul 7 05:54:21.230118 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 7 05:54:21.230135 kernel: Movable zone start for each node Jul 7 05:54:21.230151 kernel: Early memory node ranges Jul 7 05:54:21.230167 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 7 05:54:21.230184 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 7 05:54:21.230200 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 7 05:54:21.230217 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 7 05:54:21.230233 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 7 05:54:21.230249 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 7 05:54:21.230265 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 7 05:54:21.230282 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 7 05:54:21.230298 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 7 05:54:21.230319 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Jul 7 05:54:21.230336 kernel: psci: probing for conduit method from ACPI. Jul 7 05:54:21.230359 kernel: psci: PSCIv1.0 detected in firmware. Jul 7 05:54:21.230377 kernel: psci: Using standard PSCI v0.2 function IDs Jul 7 05:54:21.230394 kernel: psci: Trusted OS migration not required Jul 7 05:54:21.230416 kernel: psci: SMC Calling Convention v1.1 Jul 7 05:54:21.230434 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jul 7 05:54:21.230452 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 7 05:54:21.230469 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 7 05:54:21.230487 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 7 05:54:21.230504 kernel: Detected PIPT I-cache on CPU0 Jul 7 05:54:21.230522 kernel: CPU features: detected: GIC system register CPU interface Jul 7 05:54:21.230539 kernel: CPU features: detected: Spectre-v2 Jul 7 05:54:21.230556 kernel: CPU features: detected: Spectre-v3a Jul 7 05:54:21.230573 kernel: CPU features: detected: Spectre-BHB Jul 7 05:54:21.230591 kernel: CPU features: detected: ARM erratum 1742098 Jul 7 05:54:21.230612 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 7 05:54:21.230629 kernel: alternatives: applying boot alternatives Jul 7 05:54:21.230649 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 05:54:21.230668 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 05:54:21.230685 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 05:54:21.230703 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 05:54:21.230721 kernel: Fallback order for Node 0: 0 Jul 7 05:54:21.232864 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 7 05:54:21.232896 kernel: Policy zone: Normal Jul 7 05:54:21.232916 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 05:54:21.232933 kernel: software IO TLB: area num 2. Jul 7 05:54:21.232958 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 7 05:54:21.232977 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved) Jul 7 05:54:21.232995 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 05:54:21.233013 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 05:54:21.233031 kernel: rcu: RCU event tracing is enabled. Jul 7 05:54:21.233049 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 05:54:21.233067 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 05:54:21.233084 kernel: Tracing variant of Tasks RCU enabled. Jul 7 05:54:21.233102 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 7 05:54:21.233119 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 05:54:21.233136 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 7 05:54:21.233158 kernel: GICv3: 96 SPIs implemented Jul 7 05:54:21.233176 kernel: GICv3: 0 Extended SPIs implemented Jul 7 05:54:21.233193 kernel: Root IRQ handler: gic_handle_irq Jul 7 05:54:21.233210 kernel: GICv3: GICv3 features: 16 PPIs Jul 7 05:54:21.233227 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 7 05:54:21.233245 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 7 05:54:21.233262 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jul 7 05:54:21.233280 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jul 7 05:54:21.233297 kernel: GICv3: using LPI property table @0x00000004000d0000 Jul 7 05:54:21.233314 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 7 05:54:21.233332 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jul 7 05:54:21.233349 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 05:54:21.233371 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 7 05:54:21.233389 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 7 05:54:21.233406 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 7 05:54:21.233424 kernel: Console: colour dummy device 80x25 Jul 7 05:54:21.233442 kernel: printk: console [tty1] enabled Jul 7 05:54:21.233460 kernel: ACPI: Core revision 20230628 Jul 7 05:54:21.233478 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 7 05:54:21.233495 kernel: pid_max: default: 32768 minimum: 301 Jul 7 05:54:21.233513 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 05:54:21.233535 kernel: landlock: Up and running. Jul 7 05:54:21.233553 kernel: SELinux: Initializing. Jul 7 05:54:21.233571 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 05:54:21.233589 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 05:54:21.233626 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 05:54:21.233647 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 05:54:21.233665 kernel: rcu: Hierarchical SRCU implementation. Jul 7 05:54:21.233683 kernel: rcu: Max phase no-delay instances is 400. Jul 7 05:54:21.233701 kernel: Platform MSI: ITS@0x10080000 domain created Jul 7 05:54:21.233724 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 7 05:54:21.233769 kernel: Remapping and enabling EFI services. Jul 7 05:54:21.233791 kernel: smp: Bringing up secondary CPUs ... Jul 7 05:54:21.233808 kernel: Detected PIPT I-cache on CPU1 Jul 7 05:54:21.233827 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 7 05:54:21.233845 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jul 7 05:54:21.233863 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 7 05:54:21.233881 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 05:54:21.233898 kernel: SMP: Total of 2 processors activated. 
Jul 7 05:54:21.233915 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 05:54:21.233940 kernel: CPU features: detected: 32-bit EL1 Support Jul 7 05:54:21.233958 kernel: CPU features: detected: CRC32 instructions Jul 7 05:54:21.233987 kernel: CPU: All CPU(s) started at EL1 Jul 7 05:54:21.234010 kernel: alternatives: applying system-wide alternatives Jul 7 05:54:21.234028 kernel: devtmpfs: initialized Jul 7 05:54:21.234046 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 05:54:21.234065 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 05:54:21.234084 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 05:54:21.234102 kernel: SMBIOS 3.0.0 present. Jul 7 05:54:21.234125 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 7 05:54:21.234144 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 05:54:21.234162 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 7 05:54:21.234181 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 05:54:21.234200 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 05:54:21.234218 kernel: audit: initializing netlink subsys (disabled) Jul 7 05:54:21.234237 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1 Jul 7 05:54:21.234260 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 05:54:21.234279 kernel: cpuidle: using governor menu Jul 7 05:54:21.234297 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 7 05:54:21.234316 kernel: ASID allocator initialised with 65536 entries Jul 7 05:54:21.234334 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 05:54:21.234353 kernel: Serial: AMBA PL011 UART driver Jul 7 05:54:21.234371 kernel: Modules: 17488 pages in range for non-PLT usage Jul 7 05:54:21.234390 kernel: Modules: 509008 pages in range for PLT usage Jul 7 05:54:21.234408 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 05:54:21.234431 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 05:54:21.234450 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 05:54:21.234468 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 05:54:21.234486 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 05:54:21.234505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 05:54:21.234523 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 7 05:54:21.234542 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 05:54:21.234560 kernel: ACPI: Added _OSI(Module Device) Jul 7 05:54:21.234578 kernel: ACPI: Added _OSI(Processor Device) Jul 7 05:54:21.234601 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 05:54:21.234620 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 05:54:21.234638 kernel: ACPI: Interpreter enabled Jul 7 05:54:21.234657 kernel: ACPI: Using GIC for interrupt routing Jul 7 05:54:21.234675 kernel: ACPI: MCFG table detected, 1 entries Jul 7 05:54:21.234694 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 7 05:54:21.240346 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 05:54:21.240578 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 7 05:54:21.240908 kernel: acpi PNP0A08:00: _OSC: OS now 
controls [PCIeHotplug PME AER PCIeCapability] Jul 7 05:54:21.242870 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 7 05:54:21.243106 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 7 05:54:21.243134 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 7 05:54:21.243154 kernel: acpiphp: Slot [1] registered Jul 7 05:54:21.243173 kernel: acpiphp: Slot [2] registered Jul 7 05:54:21.243192 kernel: acpiphp: Slot [3] registered Jul 7 05:54:21.243212 kernel: acpiphp: Slot [4] registered Jul 7 05:54:21.243239 kernel: acpiphp: Slot [5] registered Jul 7 05:54:21.243259 kernel: acpiphp: Slot [6] registered Jul 7 05:54:21.243277 kernel: acpiphp: Slot [7] registered Jul 7 05:54:21.243296 kernel: acpiphp: Slot [8] registered Jul 7 05:54:21.243315 kernel: acpiphp: Slot [9] registered Jul 7 05:54:21.243333 kernel: acpiphp: Slot [10] registered Jul 7 05:54:21.243351 kernel: acpiphp: Slot [11] registered Jul 7 05:54:21.243370 kernel: acpiphp: Slot [12] registered Jul 7 05:54:21.243389 kernel: acpiphp: Slot [13] registered Jul 7 05:54:21.243407 kernel: acpiphp: Slot [14] registered Jul 7 05:54:21.243431 kernel: acpiphp: Slot [15] registered Jul 7 05:54:21.243450 kernel: acpiphp: Slot [16] registered Jul 7 05:54:21.243468 kernel: acpiphp: Slot [17] registered Jul 7 05:54:21.243487 kernel: acpiphp: Slot [18] registered Jul 7 05:54:21.243505 kernel: acpiphp: Slot [19] registered Jul 7 05:54:21.243524 kernel: acpiphp: Slot [20] registered Jul 7 05:54:21.243542 kernel: acpiphp: Slot [21] registered Jul 7 05:54:21.243560 kernel: acpiphp: Slot [22] registered Jul 7 05:54:21.243579 kernel: acpiphp: Slot [23] registered Jul 7 05:54:21.243602 kernel: acpiphp: Slot [24] registered Jul 7 05:54:21.243621 kernel: acpiphp: Slot [25] registered Jul 7 05:54:21.243639 kernel: acpiphp: Slot [26] registered Jul 7 05:54:21.243657 kernel: acpiphp: Slot [27] registered Jul 7 05:54:21.243675 kernel: acpiphp: Slot [28] registered Jul 7 05:54:21.243694 kernel: acpiphp: Slot [29] registered Jul 7 05:54:21.243712 kernel: acpiphp: Slot [30] registered Jul 7 05:54:21.243731 kernel: acpiphp: Slot [31] registered Jul 7 05:54:21.243775 kernel: PCI host bridge to bus 0000:00 Jul 7 05:54:21.244222 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 7 05:54:21.244427 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 7 05:54:21.244610 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 7 05:54:21.244898 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 7 05:54:21.245133 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 7 05:54:21.245350 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 7 05:54:21.245554 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 7 05:54:21.245899 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 7 05:54:21.246188 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 7 05:54:21.246400 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 05:54:21.246621 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 7 05:54:21.246869 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 7 05:54:21.247079 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 7 05:54:21.247290 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jul 7 05:54:21.247568 kernel: pci 0000:00:05.0: PME# 
supported from D0 D1 D2 D3hot D3cold Jul 7 05:54:21.247836 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 7 05:54:21.248045 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 7 05:54:21.248247 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 7 05:54:21.248447 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 7 05:54:21.248652 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 7 05:54:21.248877 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 7 05:54:21.249072 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 7 05:54:21.249256 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 7 05:54:21.249281 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 7 05:54:21.249301 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 7 05:54:21.249320 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 7 05:54:21.249339 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 7 05:54:21.249357 kernel: iommu: Default domain type: Translated Jul 7 05:54:21.249376 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 05:54:21.249400 kernel: efivars: Registered efivars operations Jul 7 05:54:21.249418 kernel: vgaarb: loaded Jul 7 05:54:21.249437 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 05:54:21.249455 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 05:54:21.249474 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 05:54:21.249492 kernel: pnp: PnP ACPI init Jul 7 05:54:21.253985 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 7 05:54:21.254028 kernel: pnp: PnP ACPI: found 1 devices Jul 7 05:54:21.254056 kernel: NET: Registered PF_INET protocol family Jul 7 05:54:21.254076 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 05:54:21.254095 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 05:54:21.254114 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 05:54:21.254133 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 05:54:21.254151 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 05:54:21.254170 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 05:54:21.254188 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 05:54:21.254207 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 05:54:21.254230 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 05:54:21.254249 kernel: PCI: CLS 0 bytes, default 64 Jul 7 05:54:21.254267 kernel: kvm [1]: HYP mode not available Jul 7 05:54:21.254286 kernel: Initialise system trusted keyrings Jul 7 05:54:21.254304 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 7 05:54:21.254323 kernel: Key type asymmetric registered Jul 7 05:54:21.254341 kernel: Asymmetric key parser 'x509' registered Jul 7 05:54:21.254360 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 05:54:21.254378 kernel: io scheduler mq-deadline registered Jul 7 05:54:21.254401 kernel: io scheduler kyber registered Jul 7 05:54:21.254420 kernel: io scheduler bfq registered Jul 7 05:54:21.254632 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jul 7 05:54:21.254660 
kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 7 05:54:21.254679 kernel: ACPI: button: Power Button [PWRB] Jul 7 05:54:21.254698 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 7 05:54:21.254716 kernel: ACPI: button: Sleep Button [SLPB] Jul 7 05:54:21.254735 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 05:54:21.254783 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 7 05:54:21.254993 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 7 05:54:21.255020 kernel: printk: console [ttyS0] disabled Jul 7 05:54:21.255039 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 7 05:54:21.255058 kernel: printk: console [ttyS0] enabled Jul 7 05:54:21.255077 kernel: printk: bootconsole [uart0] disabled Jul 7 05:54:21.255096 kernel: thunder_xcv, ver 1.0 Jul 7 05:54:21.255114 kernel: thunder_bgx, ver 1.0 Jul 7 05:54:21.255133 kernel: nicpf, ver 1.0 Jul 7 05:54:21.255156 kernel: nicvf, ver 1.0 Jul 7 05:54:21.255360 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 05:54:21.255550 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T05:54:20 UTC (1751867660) Jul 7 05:54:21.255576 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 05:54:21.255595 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 7 05:54:21.255614 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 7 05:54:21.255633 kernel: watchdog: Hard watchdog permanently disabled Jul 7 05:54:21.255651 kernel: NET: Registered PF_INET6 protocol family Jul 7 05:54:21.255675 kernel: Segment Routing with IPv6 Jul 7 05:54:21.255694 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 05:54:21.255713 kernel: NET: Registered PF_PACKET protocol family Jul 7 05:54:21.255731 kernel: Key type dns_resolver registered Jul 7 05:54:21.259801 kernel: registered taskstats version 1 Jul 7 05:54:21.259836 kernel: Loading compiled-in X.509 certificates Jul 7 05:54:21.259856 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94' Jul 7 05:54:21.259875 kernel: Key type .fscrypt registered Jul 7 05:54:21.259893 kernel: Key type fscrypt-provisioning registered Jul 7 05:54:21.259920 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 05:54:21.259940 kernel: ima: Allocated hash algorithm: sha1 Jul 7 05:54:21.259958 kernel: ima: No architecture policies found Jul 7 05:54:21.259976 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 05:54:21.259995 kernel: clk: Disabling unused clocks Jul 7 05:54:21.260013 kernel: Freeing unused kernel memory: 39424K Jul 7 05:54:21.260031 kernel: Run /init as init process Jul 7 05:54:21.260050 kernel: with arguments: Jul 7 05:54:21.260068 kernel: /init Jul 7 05:54:21.260086 kernel: with environment: Jul 7 05:54:21.260109 kernel: HOME=/ Jul 7 05:54:21.260127 kernel: TERM=linux Jul 7 05:54:21.260146 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 05:54:21.260169 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 05:54:21.260193 systemd[1]: Detected virtualization amazon. Jul 7 05:54:21.260214 systemd[1]: Detected architecture arm64. 
Jul 7 05:54:21.260233 systemd[1]: Running in initrd. Jul 7 05:54:21.260258 systemd[1]: No hostname configured, using default hostname. Jul 7 05:54:21.260278 systemd[1]: Hostname set to . Jul 7 05:54:21.260300 systemd[1]: Initializing machine ID from VM UUID. Jul 7 05:54:21.260320 systemd[1]: Queued start job for default target initrd.target. Jul 7 05:54:21.260340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:54:21.260360 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:54:21.260382 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 05:54:21.260403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 05:54:21.260428 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 05:54:21.260449 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 05:54:21.260473 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 05:54:21.260494 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 05:54:21.260514 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:54:21.260535 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:54:21.260556 systemd[1]: Reached target paths.target - Path Units. Jul 7 05:54:21.260580 systemd[1]: Reached target slices.target - Slice Units. Jul 7 05:54:21.260601 systemd[1]: Reached target swap.target - Swaps. Jul 7 05:54:21.260621 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:54:21.260642 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 05:54:21.260662 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 05:54:21.260683 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 05:54:21.260703 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 05:54:21.260723 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:54:21.260776 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 05:54:21.260807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:54:21.260828 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:54:21.260848 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 05:54:21.260869 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 05:54:21.260889 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 05:54:21.260910 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 05:54:21.260930 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 05:54:21.260950 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 05:54:21.260975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:54:21.261039 systemd-journald[251]: Collecting audit messages is disabled. Jul 7 05:54:21.261083 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jul 7 05:54:21.261104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:54:21.261130 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 05:54:21.261152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:54:21.261172 systemd-journald[251]: Journal started Jul 7 05:54:21.261214 systemd-journald[251]: Runtime Journal (/run/log/journal/ec221f50e99df6539250c49af362cdbe) is 8.0M, max 75.3M, 67.3M free. Jul 7 05:54:21.244147 systemd-modules-load[252]: Inserted module 'overlay' Jul 7 05:54:21.270850 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 05:54:21.277697 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 05:54:21.281974 systemd-modules-load[252]: Inserted module 'br_netfilter' Jul 7 05:54:21.282786 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 05:54:21.282846 kernel: Bridge firewalling registered Jul 7 05:54:21.285313 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 05:54:21.294454 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 05:54:21.315541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 05:54:21.330424 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 05:54:21.336466 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 05:54:21.347943 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:54:21.362098 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 05:54:21.380075 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 05:54:21.386713 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:54:21.395805 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:54:21.412201 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 05:54:21.422529 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:54:21.433404 dracut-cmdline[279]: dracut-dracut-053 Jul 7 05:54:21.440524 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 05:54:21.510474 systemd-resolved[290]: Positive Trust Anchors: Jul 7 05:54:21.510501 systemd-resolved[290]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 05:54:21.510563 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 05:54:21.599829 kernel: SCSI subsystem initialized Jul 7 05:54:21.605867 kernel: Loading iSCSI transport class v2.0-870. Jul 7 05:54:21.618988 kernel: iscsi: registered transport (tcp) Jul 7 05:54:21.641784 kernel: iscsi: registered transport (qla4xxx) Jul 7 05:54:21.641858 kernel: QLogic iSCSI HBA Driver Jul 7 05:54:21.727776 kernel: random: crng init done Jul 7 05:54:21.728143 systemd-resolved[290]: Defaulting to hostname 'linux'. Jul 7 05:54:21.731963 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 05:54:21.736652 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:54:21.765056 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 05:54:21.779137 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 05:54:21.815769 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 05:54:21.815865 kernel: device-mapper: uevent: version 1.0.3 Jul 7 05:54:21.815893 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 05:54:21.884793 kernel: raid6: neonx8 gen() 6625 MB/s Jul 7 05:54:21.901792 kernel: raid6: neonx4 gen() 6459 MB/s Jul 7 05:54:21.918785 kernel: raid6: neonx2 gen() 5408 MB/s Jul 7 05:54:21.935792 kernel: raid6: neonx1 gen() 3934 MB/s Jul 7 05:54:21.952787 kernel: raid6: int64x8 gen() 3795 MB/s Jul 7 05:54:21.969789 kernel: raid6: int64x4 gen() 3713 MB/s Jul 7 05:54:21.986786 kernel: raid6: int64x2 gen() 3585 MB/s Jul 7 05:54:22.004789 kernel: raid6: int64x1 gen() 2765 MB/s Jul 7 05:54:22.004845 kernel: raid6: using algorithm neonx8 gen() 6625 MB/s Jul 7 05:54:22.023784 kernel: raid6: .... xor() 4927 MB/s, rmw enabled Jul 7 05:54:22.023852 kernel: raid6: using neon recovery algorithm Jul 7 05:54:22.031783 kernel: xor: measuring software checksum speed Jul 7 05:54:22.034299 kernel: 8regs : 10248 MB/sec Jul 7 05:54:22.034336 kernel: 32regs : 11910 MB/sec Jul 7 05:54:22.035608 kernel: arm64_neon : 9565 MB/sec Jul 7 05:54:22.035641 kernel: xor: using function: 32regs (11910 MB/sec) Jul 7 05:54:22.120801 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 05:54:22.141097 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 05:54:22.152139 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:54:22.191713 systemd-udevd[469]: Using default interface naming scheme 'v255'. Jul 7 05:54:22.200773 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:54:22.215388 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 7 05:54:22.247561 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Jul 7 05:54:22.306540 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 05:54:22.317332 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 05:54:22.434785 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 05:54:22.452203 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 05:54:22.506199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 05:54:22.515969 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 05:54:22.521467 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 05:54:22.524108 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 05:54:22.534431 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 05:54:22.571558 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 05:54:22.641438 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 05:54:22.641913 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:54:22.654905 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 7 05:54:22.654948 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 7 05:54:22.657066 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 05:54:22.665898 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 7 05:54:22.671240 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 7 05:54:22.662943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 05:54:22.665383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:54:22.680202 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:8e:87:09:51:53 Jul 7 05:54:22.668446 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:54:22.683542 (udev-worker)[520]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:54:22.695352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:54:22.719838 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 7 05:54:22.719917 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 7 05:54:22.734793 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 7 05:54:22.735150 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:54:22.748337 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 05:54:22.748414 kernel: GPT:9289727 != 16777215 Jul 7 05:54:22.748440 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 05:54:22.749314 kernel: GPT:9289727 != 16777215 Jul 7 05:54:22.750526 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 05:54:22.751520 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 05:54:22.752569 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 05:54:22.788836 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 7 05:54:22.855268 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (527) Jul 7 05:54:22.863850 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (534) Jul 7 05:54:22.918486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 7 05:54:22.985263 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 7 05:54:22.992643 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 7 05:54:23.019550 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 7 05:54:23.036452 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 7 05:54:23.053117 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 05:54:23.067731 disk-uuid[662]: Primary Header is updated. Jul 7 05:54:23.067731 disk-uuid[662]: Secondary Entries is updated. Jul 7 05:54:23.067731 disk-uuid[662]: Secondary Header is updated. Jul 7 05:54:23.078801 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 05:54:23.094849 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 05:54:24.111206 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 05:54:24.112302 disk-uuid[663]: The operation has completed successfully. Jul 7 05:54:24.292590 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 05:54:24.293278 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 05:54:24.360050 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 05:54:24.370345 sh[1007]: Success Jul 7 05:54:24.395799 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 7 05:54:24.496131 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 05:54:24.510089 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 05:54:24.520279 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 05:54:24.557228 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d Jul 7 05:54:24.557304 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:54:24.559269 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 05:54:24.559312 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 05:54:24.560620 kernel: BTRFS info (device dm-0): using free space tree Jul 7 05:54:24.676795 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 7 05:54:24.703645 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 05:54:24.704256 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 05:54:24.716099 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 05:54:24.721207 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 7 05:54:24.770488 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:54:24.770572 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:54:24.770612 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 7 05:54:24.787802 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 7 05:54:24.807663 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 05:54:24.811305 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:54:24.822644 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 05:54:24.835171 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 05:54:24.934852 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 05:54:24.949241 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 05:54:25.003182 systemd-networkd[1199]: lo: Link UP Jul 7 05:54:25.003205 systemd-networkd[1199]: lo: Gained carrier Jul 7 05:54:25.009404 systemd-networkd[1199]: Enumeration completed Jul 7 05:54:25.009944 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 05:54:25.015253 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:54:25.015262 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:54:25.026928 systemd[1]: Reached target network.target - Network. Jul 7 05:54:25.027492 systemd-networkd[1199]: eth0: Link UP Jul 7 05:54:25.027500 systemd-networkd[1199]: eth0: Gained carrier Jul 7 05:54:25.027518 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:54:25.045827 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.16.202/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 7 05:54:25.226532 ignition[1130]: Ignition 2.19.0 Jul 7 05:54:25.227107 ignition[1130]: Stage: fetch-offline Jul 7 05:54:25.228986 ignition[1130]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:54:25.229011 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 05:54:25.230583 ignition[1130]: Ignition finished successfully Jul 7 05:54:25.239945 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 05:54:25.251113 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 05:54:25.286717 ignition[1210]: Ignition 2.19.0 Jul 7 05:54:25.286795 ignition[1210]: Stage: fetch Jul 7 05:54:25.288540 ignition[1210]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:54:25.288567 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 05:54:25.289343 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 05:54:25.306703 ignition[1210]: PUT result: OK Jul 7 05:54:25.309971 ignition[1210]: parsed url from cmdline: "" Jul 7 05:54:25.309988 ignition[1210]: no config URL provided Jul 7 05:54:25.310006 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 05:54:25.310031 ignition[1210]: no config at "/usr/lib/ignition/user.ign" Jul 7 05:54:25.310065 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 05:54:25.314301 ignition[1210]: PUT result: OK Jul 7 05:54:25.314379 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 7 05:54:25.319276 ignition[1210]: GET result: OK Jul 7 05:54:25.319361 ignition[1210]: parsing config with SHA512: 3924344f7474241f664e9269bd64dfa77b22bff71d190a3993645b7e83334e406f3dd76b5edc5b6367348595f4f53b35b4997aa46662dffb16682abe5c5cc294 Jul 7 05:54:25.329878 unknown[1210]: fetched base config from "system" Jul 7 05:54:25.331916 unknown[1210]: fetched base config from "system" Jul 7 05:54:25.332545 ignition[1210]: fetch: fetch complete Jul 7 05:54:25.331935 unknown[1210]: fetched user config from "aws" Jul 7 05:54:25.332560 ignition[1210]: fetch: fetch passed Jul 7 05:54:25.332682 ignition[1210]: Ignition finished successfully Jul 7 05:54:25.343104 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 05:54:25.358166 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 05:54:25.387675 ignition[1216]: Ignition 2.19.0 Jul 7 05:54:25.387697 ignition[1216]: Stage: kargs Jul 7 05:54:25.388412 ignition[1216]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:54:25.388438 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 05:54:25.388595 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 05:54:25.396608 ignition[1216]: PUT result: OK Jul 7 05:54:25.403271 ignition[1216]: kargs: kargs passed Jul 7 05:54:25.403410 ignition[1216]: Ignition finished successfully Jul 7 05:54:25.407148 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 05:54:25.427251 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 05:54:25.458618 ignition[1222]: Ignition 2.19.0 Jul 7 05:54:25.458655 ignition[1222]: Stage: disks Jul 7 05:54:25.460728 ignition[1222]: no configs at "/usr/lib/ignition/base.d" Jul 7 05:54:25.460799 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 05:54:25.462153 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 05:54:25.467852 ignition[1222]: PUT result: OK Jul 7 05:54:25.475877 ignition[1222]: disks: disks passed Jul 7 05:54:25.476026 ignition[1222]: Ignition finished successfully Jul 7 05:54:25.480933 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 05:54:25.486793 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 05:54:25.489539 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 05:54:25.494895 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 05:54:25.497390 systemd[1]: Reached target sysinit.target - System Initialization. 
Jul 7 05:54:25.499970 systemd[1]: Reached target basic.target - Basic System. Jul 7 05:54:25.520136 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 05:54:25.565553 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 7 05:54:25.571806 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 05:54:25.584106 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 05:54:25.671802 kernel: EXT4-fs (nvme0n1p9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none. Jul 7 05:54:25.672517 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 05:54:25.676875 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 05:54:25.696983 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 05:54:25.701007 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 05:54:25.707361 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 05:54:25.707485 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 05:54:25.707540 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 05:54:25.732775 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1250) Jul 7 05:54:25.739331 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:54:25.739424 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:54:25.739472 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 7 05:54:25.746075 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 05:54:25.755799 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 7 05:54:25.759174 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 05:54:25.768435 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 05:54:26.192180 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 05:54:26.203006 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory Jul 7 05:54:26.213537 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 05:54:26.223797 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 05:54:26.514586 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 05:54:26.529060 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 05:54:26.536052 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 05:54:26.555541 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 05:54:26.559969 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:54:26.572088 systemd-networkd[1199]: eth0: Gained IPv6LL Jul 7 05:54:26.598733 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 7 05:54:26.608015 ignition[1363]: INFO : Ignition 2.19.0 Jul 7 05:54:26.608015 ignition[1363]: INFO : Stage: mount Jul 7 05:54:26.612264 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 05:54:26.612264 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 05:54:26.612264 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 05:54:26.620153 ignition[1363]: INFO : PUT result: OK Jul 7 05:54:26.625130 ignition[1363]: INFO : mount: mount passed Jul 7 05:54:26.627019 ignition[1363]: INFO : Ignition finished successfully Jul 7 05:54:26.628471 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 05:54:26.642916 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 05:54:26.682137 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 05:54:26.705780 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1374) Jul 7 05:54:26.710294 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e Jul 7 05:54:26.710361 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 7 05:54:26.710388 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 7 05:54:26.716782 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 7 05:54:26.721004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 05:54:26.772819 ignition[1391]: INFO : Ignition 2.19.0 Jul 7 05:54:26.772819 ignition[1391]: INFO : Stage: files Jul 7 05:54:26.772819 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 05:54:26.772819 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 05:54:26.772819 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 05:54:26.785480 ignition[1391]: INFO : PUT result: OK Jul 7 05:54:26.789416 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping Jul 7 05:54:26.793536 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 05:54:26.796724 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 05:54:26.820468 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 05:54:26.823948 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 05:54:26.827254 unknown[1391]: wrote ssh authorized keys file for user: core Jul 7 05:54:26.829905 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 05:54:26.839428 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 7 05:54:26.843451 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 7 05:54:26.843451 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 05:54:26.843451 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 05:54:26.843451 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 05:54:26.843451 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] 
writing file "/sysroot/etc/flatcar/update.conf" Jul 7 05:54:26.843451 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 7 05:54:26.843451 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 7 05:54:26.843451 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 7 05:54:26.843451 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 7 05:54:27.455724 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jul 7 05:54:27.878665 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 7 05:54:27.878665 ignition[1391]: INFO : files: op(8): [started] processing unit "containerd.service" Jul 7 05:54:27.887902 ignition[1391]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 7 05:54:27.887902 ignition[1391]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 7 05:54:27.887902 ignition[1391]: INFO : files: op(8): [finished] processing unit "containerd.service" Jul 7 05:54:27.887902 ignition[1391]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 05:54:27.887902 ignition[1391]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 05:54:27.887902 ignition[1391]: INFO : files: files passed Jul 7 05:54:27.887902 ignition[1391]: INFO : Ignition finished successfully Jul 7 05:54:27.897836 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 05:54:27.923233 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 05:54:27.942612 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 05:54:27.970504 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 05:54:27.970796 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 05:54:27.990385 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 05:54:27.990385 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 05:54:28.001927 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 05:54:28.006178 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 05:54:28.010808 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 05:54:28.025187 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 05:54:28.085800 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jul 7 05:54:28.086270 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 05:54:28.096163 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 05:54:28.109141 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 05:54:28.114309 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 05:54:28.131296 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 05:54:28.166938 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 05:54:28.181336 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 05:54:28.211483 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:54:28.215420 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 05:54:28.223981 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 05:54:28.230393 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 05:54:28.230686 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 05:54:28.234252 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 05:54:28.236794 systemd[1]: Stopped target basic.target - Basic System. Jul 7 05:54:28.240939 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 05:54:28.255733 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 05:54:28.258827 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 05:54:28.264237 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 05:54:28.267091 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 05:54:28.270962 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 05:54:28.283345 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 05:54:28.283671 systemd[1]: Stopped target swap.target - Swaps. Jul 7 05:54:28.283932 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 05:54:28.284206 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 05:54:28.298634 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:54:28.302492 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:54:28.307736 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 05:54:28.315240 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:54:28.318524 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 05:54:28.318953 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 05:54:28.338327 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 05:54:28.339099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 05:54:28.350338 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 05:54:28.351766 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 05:54:28.366106 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 05:54:28.374471 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 05:54:28.379086 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 7 05:54:28.386971 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 05:54:28.391137 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 05:54:28.391405 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 05:54:28.415593 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 05:54:28.415957 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 05:54:28.433001 ignition[1444]: INFO : Ignition 2.19.0 Jul 7 05:54:28.433001 ignition[1444]: INFO : Stage: umount Jul 7 05:54:28.433001 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 05:54:28.448201 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 05:54:28.448201 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 05:54:28.448201 ignition[1444]: INFO : PUT result: OK Jul 7 05:54:28.448201 ignition[1444]: INFO : umount: umount passed Jul 7 05:54:28.448201 ignition[1444]: INFO : Ignition finished successfully Jul 7 05:54:28.441562 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 05:54:28.442025 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 05:54:28.449481 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 05:54:28.455864 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 05:54:28.458941 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 05:54:28.459034 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 05:54:28.461370 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 05:54:28.461727 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 05:54:28.472602 systemd[1]: Stopped target network.target - Network. Jul 7 05:54:28.474546 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 05:54:28.474676 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 05:54:28.479304 systemd[1]: Stopped target paths.target - Path Units. Jul 7 05:54:28.481223 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 05:54:28.481337 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:54:28.485993 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 05:54:28.508652 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 05:54:28.513763 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 05:54:28.513900 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 05:54:28.523991 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 05:54:28.524074 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 05:54:28.526356 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 05:54:28.526449 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 05:54:28.528800 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 05:54:28.528877 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 05:54:28.531675 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 05:54:28.543798 systemd-networkd[1199]: eth0: DHCPv6 lease lost Jul 7 05:54:28.550965 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 05:54:28.557372 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jul 7 05:54:28.560181 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 05:54:28.565677 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 05:54:28.570709 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 05:54:28.572871 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 05:54:28.578147 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 05:54:28.578254 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:54:28.585930 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 05:54:28.586043 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 05:54:28.601007 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 05:54:28.603307 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 05:54:28.603440 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 05:54:28.613405 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:54:28.617052 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 05:54:28.618213 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 05:54:28.638184 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 05:54:28.638356 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:54:28.641253 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 05:54:28.641347 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 05:54:28.647418 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 05:54:28.649653 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:54:28.653333 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 05:54:28.653760 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:54:28.669816 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 05:54:28.670362 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 05:54:28.680440 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 05:54:28.680527 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:54:28.682915 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 05:54:28.683009 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 05:54:28.685797 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 05:54:28.685881 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 05:54:28.688713 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 05:54:28.688814 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:54:28.706141 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 05:54:28.719146 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 05:54:28.719267 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:54:28.722050 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 05:54:28.722143 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 7 05:54:28.730961 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 05:54:28.731046 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:54:28.733669 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 05:54:28.733773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:54:28.752353 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 05:54:28.752795 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 05:54:28.772322 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 05:54:28.774019 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 05:54:28.780369 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 05:54:28.792134 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 05:54:28.833016 systemd[1]: Switching root. Jul 7 05:54:28.870508 systemd-journald[251]: Journal stopped Jul 7 05:54:31.406958 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jul 7 05:54:31.407096 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 05:54:31.407137 kernel: SELinux: policy capability open_perms=1 Jul 7 05:54:31.407170 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 05:54:31.407201 kernel: SELinux: policy capability always_check_network=0 Jul 7 05:54:31.407236 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 05:54:31.407269 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 05:54:31.407301 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 05:54:31.407331 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 05:54:31.407366 kernel: audit: type=1403 audit(1751867669.509:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 05:54:31.407409 systemd[1]: Successfully loaded SELinux policy in 84.751ms. Jul 7 05:54:31.407459 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.914ms. Jul 7 05:54:31.407493 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 05:54:31.407527 systemd[1]: Detected virtualization amazon. Jul 7 05:54:31.407559 systemd[1]: Detected architecture arm64. Jul 7 05:54:31.407590 systemd[1]: Detected first boot. Jul 7 05:54:31.407622 systemd[1]: Initializing machine ID from VM UUID. Jul 7 05:54:31.407663 zram_generator::config[1504]: No configuration found. Jul 7 05:54:31.407696 systemd[1]: Populated /etc with preset unit settings. Jul 7 05:54:31.407729 systemd[1]: Queued start job for default target multi-user.target. Jul 7 05:54:31.407784 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 7 05:54:31.407818 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 05:54:31.407852 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 05:54:31.407882 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 05:54:31.407914 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 05:54:31.407948 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Jul 7 05:54:31.407979 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 05:54:31.408013 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 05:54:31.408051 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 05:54:31.408083 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:54:31.408116 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:54:31.408148 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 05:54:31.408181 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 05:54:31.408211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 05:54:31.408242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 05:54:31.408273 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 05:54:31.408303 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:54:31.408337 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 05:54:31.408367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 05:54:31.408398 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 05:54:31.408428 systemd[1]: Reached target slices.target - Slice Units. Jul 7 05:54:31.408459 systemd[1]: Reached target swap.target - Swaps. Jul 7 05:54:31.408489 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 05:54:31.408518 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 05:54:31.408549 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 05:54:31.408585 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 05:54:31.408617 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:54:31.408647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 05:54:31.408679 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:54:31.408710 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 05:54:31.408768 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 05:54:31.408811 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 05:54:31.408842 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 05:54:31.408873 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 05:54:31.408910 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 05:54:31.408940 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 05:54:31.408972 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 05:54:31.409002 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:54:31.409031 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 05:54:31.409061 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jul 7 05:54:31.409092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:54:31.409125 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 05:54:31.409155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:54:31.409190 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 05:54:31.409222 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:54:31.409252 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 05:54:31.409282 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 7 05:54:31.409324 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 7 05:54:31.409354 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 05:54:31.409387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 05:54:31.409417 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 05:54:31.409450 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 05:54:31.409482 kernel: fuse: init (API version 7.39) Jul 7 05:54:31.409514 kernel: loop: module loaded Jul 7 05:54:31.409542 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 05:54:31.409598 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 05:54:31.409631 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 05:54:31.409661 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 05:54:31.409760 systemd-journald[1601]: Collecting audit messages is disabled. Jul 7 05:54:31.409823 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 05:54:31.409867 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 05:54:31.409902 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 05:54:31.409934 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:54:31.409964 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 05:54:31.409995 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 05:54:31.410023 kernel: ACPI: bus type drm_connector registered Jul 7 05:54:31.410053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:54:31.410082 systemd-journald[1601]: Journal started Jul 7 05:54:31.410133 systemd-journald[1601]: Runtime Journal (/run/log/journal/ec221f50e99df6539250c49af362cdbe) is 8.0M, max 75.3M, 67.3M free. Jul 7 05:54:31.412833 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:54:31.422358 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 05:54:31.430661 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 05:54:31.435111 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 05:54:31.438632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:54:31.440512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:54:31.443881 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jul 7 05:54:31.444214 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 05:54:31.447220 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:54:31.450450 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:54:31.454593 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 05:54:31.458582 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 05:54:31.462355 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 05:54:31.490704 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 05:54:31.501975 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 05:54:31.517042 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 05:54:31.530069 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 05:54:31.532620 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 05:54:31.549047 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 05:54:31.559092 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 05:54:31.561824 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 05:54:31.581901 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 05:54:31.587980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 05:54:31.604954 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 05:54:31.612966 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 05:54:31.629019 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 05:54:31.633213 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 05:54:31.661701 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 05:54:31.665274 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 05:54:31.680617 systemd-journald[1601]: Time spent on flushing to /var/log/journal/ec221f50e99df6539250c49af362cdbe is 95.276ms for 881 entries. Jul 7 05:54:31.680617 systemd-journald[1601]: System Journal (/var/log/journal/ec221f50e99df6539250c49af362cdbe) is 8.0M, max 195.6M, 187.6M free. Jul 7 05:54:31.788471 systemd-journald[1601]: Received client request to flush runtime journal. Jul 7 05:54:31.735627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:54:31.750265 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 05:54:31.768043 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 05:54:31.796638 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 05:54:31.814679 systemd-tmpfiles[1656]: ACLs are not supported, ignoring. Jul 7 05:54:31.814719 systemd-tmpfiles[1656]: ACLs are not supported, ignoring. Jul 7 05:54:31.827148 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 7 05:54:31.847408 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 05:54:31.851121 udevadm[1668]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 7 05:54:31.937138 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 05:54:31.949132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 05:54:31.997901 systemd-tmpfiles[1678]: ACLs are not supported, ignoring. Jul 7 05:54:31.997946 systemd-tmpfiles[1678]: ACLs are not supported, ignoring. Jul 7 05:54:32.007933 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:54:32.659001 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 05:54:32.673132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:54:32.732543 systemd-udevd[1684]: Using default interface naming scheme 'v255'. Jul 7 05:54:32.785033 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:54:32.795039 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 05:54:32.845977 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 05:54:32.947001 (udev-worker)[1700]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:54:32.951266 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jul 7 05:54:33.006892 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 05:54:33.196447 systemd-networkd[1688]: lo: Link UP Jul 7 05:54:33.196470 systemd-networkd[1688]: lo: Gained carrier Jul 7 05:54:33.200357 systemd-networkd[1688]: Enumeration completed Jul 7 05:54:33.200617 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 05:54:33.205290 systemd-networkd[1688]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:54:33.205318 systemd-networkd[1688]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:54:33.213183 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 05:54:33.220215 systemd-networkd[1688]: eth0: Link UP Jul 7 05:54:33.223044 systemd-networkd[1688]: eth0: Gained carrier Jul 7 05:54:33.223095 systemd-networkd[1688]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:54:33.233029 systemd-networkd[1688]: eth0: DHCPv4 address 172.31.16.202/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 7 05:54:33.266814 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1697) Jul 7 05:54:33.317769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:54:33.512936 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 05:54:33.516816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:54:33.553130 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 7 05:54:33.569084 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 05:54:33.604573 lvm[1813]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jul 7 05:54:33.644495 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 05:54:33.647845 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:54:33.662271 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 05:54:33.673633 lvm[1816]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 05:54:33.712461 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 05:54:33.715653 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 05:54:33.718901 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 05:54:33.718965 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 05:54:33.721722 systemd[1]: Reached target machines.target - Containers. Jul 7 05:54:33.725937 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 05:54:33.739038 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 05:54:33.748091 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 05:54:33.750646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:54:33.762088 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 05:54:33.770070 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 05:54:33.786010 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 05:54:33.792236 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 05:54:33.820015 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 05:54:33.831930 kernel: loop0: detected capacity change from 0 to 203944 Jul 7 05:54:33.837621 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 05:54:33.839132 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 05:54:33.875791 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 05:54:33.915166 kernel: loop1: detected capacity change from 0 to 114432 Jul 7 05:54:34.026784 kernel: loop2: detected capacity change from 0 to 52536 Jul 7 05:54:34.087803 kernel: loop3: detected capacity change from 0 to 114328 Jul 7 05:54:34.193809 kernel: loop4: detected capacity change from 0 to 203944 Jul 7 05:54:34.221794 kernel: loop5: detected capacity change from 0 to 114432 Jul 7 05:54:34.235775 kernel: loop6: detected capacity change from 0 to 52536 Jul 7 05:54:34.257790 kernel: loop7: detected capacity change from 0 to 114328 Jul 7 05:54:34.266938 (sd-merge)[1837]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 7 05:54:34.267960 (sd-merge)[1837]: Merged extensions into '/usr'. Jul 7 05:54:34.297542 systemd[1]: Reloading requested from client PID 1824 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 05:54:34.297841 systemd[1]: Reloading... Jul 7 05:54:34.432847 zram_generator::config[1866]: No configuration found. 
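The "(sd-merge)" lines above are systemd-sysext activating the extension images that Ignition prepared earlier in this log (for example the /etc/extensions/kubernetes.raw symlink pointing at /opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw) and merging them into an overlay on /usr. The sketch below illustrates only the discovery step under the assumption that a subset of systemd-sysext's documented search directories is in play; it is not the systemd implementation.

    from pathlib import Path

    # Directories systemd-sysext scans for *.raw images or extension trees
    # (a subset of its documented search path; an assumption for this sketch).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions() -> dict[str, Path]:
        found: dict[str, Path] = {}
        for d in SEARCH_DIRS:
            base = Path(d)
            if not base.is_dir():
                continue
            for entry in sorted(base.iterdir()):
                # Symlinks such as kubernetes.raw -> /opt/extensions/kubernetes/...-arm64.raw
                # are followed to the real image, mirroring the links written above.
                name = entry.name.removesuffix(".raw")
                found.setdefault(name, entry.resolve())
        return found

    if __name__ == "__main__":
        for name, image in discover_extensions().items():
            print(f"would merge extension {name!r} from {image}")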
Jul 7 05:54:34.728943 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:54:34.891984 systemd-networkd[1688]: eth0: Gained IPv6LL Jul 7 05:54:34.897944 systemd[1]: Reloading finished in 599 ms. Jul 7 05:54:34.935267 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 05:54:34.944159 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 05:54:34.960116 systemd[1]: Starting ensure-sysext.service... Jul 7 05:54:34.966099 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 05:54:35.005036 systemd[1]: Reloading requested from client PID 1924 ('systemctl') (unit ensure-sysext.service)... Jul 7 05:54:35.005081 systemd[1]: Reloading... Jul 7 05:54:35.055357 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 05:54:35.058280 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 05:54:35.064417 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 05:54:35.065131 systemd-tmpfiles[1925]: ACLs are not supported, ignoring. Jul 7 05:54:35.065277 systemd-tmpfiles[1925]: ACLs are not supported, ignoring. Jul 7 05:54:35.069789 ldconfig[1820]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 05:54:35.075796 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 05:54:35.075843 systemd-tmpfiles[1925]: Skipping /boot Jul 7 05:54:35.109423 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 05:54:35.109459 systemd-tmpfiles[1925]: Skipping /boot Jul 7 05:54:35.190877 zram_generator::config[1958]: No configuration found. Jul 7 05:54:35.477838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:54:35.644973 systemd[1]: Reloading finished in 639 ms. Jul 7 05:54:35.675909 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 05:54:35.685028 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:54:35.707120 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 05:54:35.720077 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 05:54:35.734911 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 05:54:35.747238 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 05:54:35.760190 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 05:54:35.795150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:54:35.812866 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:54:35.828337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 7 05:54:35.842311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:54:35.846124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:54:35.848607 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 05:54:35.868489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:54:35.868991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:54:35.897351 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:54:35.915413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:54:35.920320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:54:35.931630 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 05:54:35.943057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:54:35.944209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:54:35.956142 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:54:35.956586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:54:35.962504 augenrules[2049]: No rules Jul 7 05:54:35.966111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:54:35.974120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:54:35.983147 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 05:54:36.001268 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 05:54:36.029229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:54:36.042209 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:54:36.052261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 05:54:36.061321 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:54:36.092138 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:54:36.095220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:54:36.095698 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 05:54:36.117880 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 05:54:36.125249 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 05:54:36.131134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:54:36.137220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:54:36.145618 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 05:54:36.146087 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 05:54:36.155279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:54:36.159800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:54:36.163773 systemd[1]: Finished ensure-sysext.service. Jul 7 05:54:36.168382 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 7 05:54:36.170138 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:54:36.198579 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 05:54:36.198807 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 05:54:36.198874 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 05:54:36.204810 systemd-resolved[2020]: Positive Trust Anchors: Jul 7 05:54:36.204848 systemd-resolved[2020]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 05:54:36.204915 systemd-resolved[2020]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 05:54:36.219409 systemd-resolved[2020]: Defaulting to hostname 'linux'. Jul 7 05:54:36.223209 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 05:54:36.226247 systemd[1]: Reached target network.target - Network. Jul 7 05:54:36.228297 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 05:54:36.230871 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:54:36.233927 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 05:54:36.236806 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 05:54:36.239869 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 05:54:36.243243 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 05:54:36.246162 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 05:54:36.249067 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 05:54:36.251920 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 05:54:36.251985 systemd[1]: Reached target paths.target - Path Units. Jul 7 05:54:36.254165 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:54:36.257434 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 05:54:36.263342 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 05:54:36.268099 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 05:54:36.282043 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 05:54:36.284985 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:54:36.287985 systemd[1]: Reached target basic.target - Basic System. 
Jul 7 05:54:36.291303 systemd[1]: System is tainted: cgroupsv1 Jul 7 05:54:36.291644 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:54:36.291700 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:54:36.295515 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 05:54:36.308219 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 05:54:36.314037 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 05:54:36.323970 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 05:54:36.338023 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 05:54:36.340388 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 05:54:36.362948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:54:36.372230 jq[2087]: false Jul 7 05:54:36.381170 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 05:54:36.410235 systemd[1]: Started ntpd.service - Network Time Service. Jul 7 05:54:36.430041 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 05:54:36.456921 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 7 05:54:36.474195 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 05:54:36.496592 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 05:54:36.510696 dbus-daemon[2085]: [system] SELinux support is enabled Jul 7 05:54:36.514119 extend-filesystems[2088]: Found loop4 Jul 7 05:54:36.520683 extend-filesystems[2088]: Found loop5 Jul 7 05:54:36.520683 extend-filesystems[2088]: Found loop6 Jul 7 05:54:36.520683 extend-filesystems[2088]: Found loop7 Jul 7 05:54:36.520683 extend-filesystems[2088]: Found nvme0n1 Jul 7 05:54:36.520683 extend-filesystems[2088]: Found nvme0n1p1 Jul 7 05:54:36.520683 extend-filesystems[2088]: Found nvme0n1p2 Jul 7 05:54:36.520683 extend-filesystems[2088]: Found nvme0n1p3 Jul 7 05:54:36.520683 extend-filesystems[2088]: Found usr Jul 7 05:54:36.520683 extend-filesystems[2088]: Found nvme0n1p4 Jul 7 05:54:36.528504 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 05:54:36.562537 extend-filesystems[2088]: Found nvme0n1p6 Jul 7 05:54:36.562537 extend-filesystems[2088]: Found nvme0n1p7 Jul 7 05:54:36.562537 extend-filesystems[2088]: Found nvme0n1p9 Jul 7 05:54:36.562537 extend-filesystems[2088]: Checking size of /dev/nvme0n1p9 Jul 7 05:54:36.560134 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 05:54:36.551572 dbus-daemon[2085]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1688 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 05:54:36.585132 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 7 05:54:36.597311 ntpd[2095]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:27 UTC 2025 (1): Starting Jul 7 05:54:36.597800 ntpd[2095]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 05:54:36.601109 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:27 UTC 2025 (1): Starting Jul 7 05:54:36.601109 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 05:54:36.601109 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: ---------------------------------------------------- Jul 7 05:54:36.601109 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: ntp-4 is maintained by Network Time Foundation, Jul 7 05:54:36.601109 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 05:54:36.601109 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: corporation. Support and training for ntp-4 are Jul 7 05:54:36.601109 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: available at https://www.nwtime.org/support Jul 7 05:54:36.601109 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: ---------------------------------------------------- Jul 7 05:54:36.597823 ntpd[2095]: ---------------------------------------------------- Jul 7 05:54:36.632666 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: proto: precision = 0.108 usec (-23) Jul 7 05:54:36.632666 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: basedate set to 2025-06-24 Jul 7 05:54:36.632666 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: gps base set to 2025-06-29 (week 2373) Jul 7 05:54:36.632666 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 05:54:36.632666 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 05:54:36.632666 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 05:54:36.632666 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: Listen normally on 3 eth0 172.31.16.202:123 Jul 7 05:54:36.632666 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: Listen normally on 4 lo [::1]:123 Jul 7 05:54:36.632666 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: Listen normally on 5 eth0 [fe80::48e:87ff:fe09:5153%2]:123 Jul 7 05:54:36.632666 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: Listening on routing socket on fd #22 for interface updates Jul 7 05:54:36.624655 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 05:54:36.597843 ntpd[2095]: ntp-4 is maintained by Network Time Foundation, Jul 7 05:54:36.633317 coreos-metadata[2084]: Jul 07 05:54:36.629 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 05:54:36.597861 ntpd[2095]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.634 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.636 INFO Fetch successful Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.636 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.637 INFO Fetch successful Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.637 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.650 INFO Fetch successful Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.650 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.652 INFO Fetch successful Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.655 INFO Fetch failed with 404: resource not found Jul 7 05:54:36.656823 coreos-metadata[2084]: Jul 07 05:54:36.656 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 7 05:54:36.639276 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 05:54:36.597881 ntpd[2095]: corporation. Support and training for ntp-4 are Jul 7 05:54:36.654703 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 05:54:36.597900 ntpd[2095]: available at https://www.nwtime.org/support Jul 7 05:54:36.667519 coreos-metadata[2084]: Jul 07 05:54:36.657 INFO Fetch successful Jul 7 05:54:36.667519 coreos-metadata[2084]: Jul 07 05:54:36.657 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 7 05:54:36.667519 coreos-metadata[2084]: Jul 07 05:54:36.661 INFO Fetch successful Jul 7 05:54:36.667519 coreos-metadata[2084]: Jul 07 05:54:36.661 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 7 05:54:36.655311 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 05:54:36.597918 ntpd[2095]: ---------------------------------------------------- Jul 7 05:54:36.605446 ntpd[2095]: proto: precision = 0.108 usec (-23) Jul 7 05:54:36.616425 ntpd[2095]: basedate set to 2025-06-24 Jul 7 05:54:36.616464 ntpd[2095]: gps base set to 2025-06-29 (week 2373) Jul 7 05:54:36.624073 ntpd[2095]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 05:54:36.624171 ntpd[2095]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 05:54:36.627597 ntpd[2095]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 05:54:36.677319 systemd[1]: motdgen.service: Deactivated successfully. 
Jul 7 05:54:36.682072 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:54:36.682072 ntpd[2095]: 7 Jul 05:54:36 ntpd[2095]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:54:36.682217 coreos-metadata[2084]: Jul 07 05:54:36.672 INFO Fetch successful Jul 7 05:54:36.682217 coreos-metadata[2084]: Jul 07 05:54:36.672 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 7 05:54:36.682217 coreos-metadata[2084]: Jul 07 05:54:36.678 INFO Fetch successful Jul 7 05:54:36.682217 coreos-metadata[2084]: Jul 07 05:54:36.680 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 7 05:54:36.627701 ntpd[2095]: Listen normally on 3 eth0 172.31.16.202:123 Jul 7 05:54:36.682164 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 05:54:36.627831 ntpd[2095]: Listen normally on 4 lo [::1]:123 Jul 7 05:54:36.628000 ntpd[2095]: Listen normally on 5 eth0 [fe80::48e:87ff:fe09:5153%2]:123 Jul 7 05:54:36.628078 ntpd[2095]: Listening on routing socket on fd #22 for interface updates Jul 7 05:54:36.670487 ntpd[2095]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:54:36.670545 ntpd[2095]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:54:36.699778 coreos-metadata[2084]: Jul 07 05:54:36.690 INFO Fetch successful Jul 7 05:54:36.697919 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 05:54:36.710381 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 05:54:36.729292 extend-filesystems[2088]: Resized partition /dev/nvme0n1p9 Jul 7 05:54:36.737412 jq[2117]: true Jul 7 05:54:36.766852 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 7 05:54:36.767022 extend-filesystems[2130]: resize2fs 1.47.1 (20-May-2024) Jul 7 05:54:36.859332 update_engine[2113]: I20250707 05:54:36.858870 2113 main.cc:92] Flatcar Update Engine starting Jul 7 05:54:36.860452 (ntainerd)[2140]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 05:54:36.861983 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 05:54:36.891474 update_engine[2113]: I20250707 05:54:36.890949 2113 update_check_scheduler.cc:74] Next update check in 3m56s Jul 7 05:54:36.937181 dbus-daemon[2085]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 05:54:36.957943 systemd[1]: Started update-engine.service - Update Engine. Jul 7 05:54:36.972601 jq[2134]: true Jul 7 05:54:36.973131 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 7 05:54:36.979994 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 05:54:36.985104 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 05:54:36.985162 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 05:54:37.009836 extend-filesystems[2130]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 7 05:54:37.009836 extend-filesystems[2130]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 05:54:37.009836 extend-filesystems[2130]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
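For scale, the EXT4 resize reported above grows the root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 5.7 GiB. A quick check of that arithmetic:

    # Block counts taken from the resize2fs / EXT4 messages above; 4 KiB blocks as logged.
    BLOCK = 4096
    before_blocks = 553_472
    after_blocks = 1_489_915

    GIB = 1024 ** 3
    print(f"before: {before_blocks * BLOCK / GIB:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {after_blocks * BLOCK / GIB:.2f} GiB")   # ~5.68 GiB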
Jul 7 05:54:37.020829 extend-filesystems[2088]: Resized filesystem in /dev/nvme0n1p9 Jul 7 05:54:37.025951 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 05:54:37.033086 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 05:54:37.033161 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 05:54:37.039995 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 05:54:37.063075 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 05:54:37.068734 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 05:54:37.074666 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 05:54:37.115810 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 05:54:37.161605 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 05:54:37.179668 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 7 05:54:37.187078 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 7 05:54:37.291314 systemd-logind[2106]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 05:54:37.291383 systemd-logind[2106]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 7 05:54:37.292619 systemd-logind[2106]: New seat seat0. Jul 7 05:54:37.294459 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 05:54:37.356461 bash[2211]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:54:37.357789 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2172) Jul 7 05:54:37.376832 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 05:54:37.466935 systemd[1]: Starting sshkeys.service... Jul 7 05:54:37.546396 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 05:54:37.556144 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 05:54:37.584502 amazon-ssm-agent[2186]: Initializing new seelog logger Jul 7 05:54:37.584502 amazon-ssm-agent[2186]: New Seelog Logger Creation Complete Jul 7 05:54:37.610038 amazon-ssm-agent[2186]: 2025/07/07 05:54:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:54:37.610038 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:54:37.610038 amazon-ssm-agent[2186]: 2025/07/07 05:54:37 processing appconfig overrides Jul 7 05:54:37.610038 amazon-ssm-agent[2186]: 2025/07/07 05:54:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:54:37.610038 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:54:37.610038 amazon-ssm-agent[2186]: 2025/07/07 05:54:37 processing appconfig overrides Jul 7 05:54:37.610038 amazon-ssm-agent[2186]: 2025/07/07 05:54:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:54:37.610038 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 7 05:54:37.610038 amazon-ssm-agent[2186]: 2025/07/07 05:54:37 processing appconfig overrides Jul 7 05:54:37.610038 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO Proxy environment variables: Jul 7 05:54:37.618806 amazon-ssm-agent[2186]: 2025/07/07 05:54:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:54:37.618806 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:54:37.619024 amazon-ssm-agent[2186]: 2025/07/07 05:54:37 processing appconfig overrides Jul 7 05:54:37.661985 containerd[2140]: time="2025-07-07T05:54:37.659499241Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 05:54:37.717506 locksmithd[2169]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 05:54:37.724784 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO https_proxy: Jul 7 05:54:37.823625 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO http_proxy: Jul 7 05:54:37.933840 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO no_proxy: Jul 7 05:54:37.944275 containerd[2140]: time="2025-07-07T05:54:37.944155862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:54:37.962359 containerd[2140]: time="2025-07-07T05:54:37.962138138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:54:37.962359 containerd[2140]: time="2025-07-07T05:54:37.962274734Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 05:54:37.962594 containerd[2140]: time="2025-07-07T05:54:37.962318006Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 05:54:37.971124 containerd[2140]: time="2025-07-07T05:54:37.970285250Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 05:54:37.971124 containerd[2140]: time="2025-07-07T05:54:37.970389422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 05:54:37.971124 containerd[2140]: time="2025-07-07T05:54:37.970707734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:54:37.971124 containerd[2140]: time="2025-07-07T05:54:37.970783682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:54:37.974147 containerd[2140]: time="2025-07-07T05:54:37.973906082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:54:37.974147 containerd[2140]: time="2025-07-07T05:54:37.973988378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 05:54:37.974147 containerd[2140]: time="2025-07-07T05:54:37.974030762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:54:37.974147 containerd[2140]: time="2025-07-07T05:54:37.974083274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 05:54:37.980917 containerd[2140]: time="2025-07-07T05:54:37.978438782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:54:37.981239 containerd[2140]: time="2025-07-07T05:54:37.981186494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:54:37.986522 containerd[2140]: time="2025-07-07T05:54:37.986159126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:54:37.986522 containerd[2140]: time="2025-07-07T05:54:37.986228390Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 05:54:37.990384 containerd[2140]: time="2025-07-07T05:54:37.988381214Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 05:54:37.990384 containerd[2140]: time="2025-07-07T05:54:37.988593398Z" level=info msg="metadata content store policy set" policy=shared Jul 7 05:54:38.010073 containerd[2140]: time="2025-07-07T05:54:38.009158602Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 05:54:38.010073 containerd[2140]: time="2025-07-07T05:54:38.009282634Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 05:54:38.010073 containerd[2140]: time="2025-07-07T05:54:38.009427246Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 05:54:38.010073 containerd[2140]: time="2025-07-07T05:54:38.009498502Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 05:54:38.010073 containerd[2140]: time="2025-07-07T05:54:38.009565210Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 05:54:38.010073 containerd[2140]: time="2025-07-07T05:54:38.009898294Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 05:54:38.012990 containerd[2140]: time="2025-07-07T05:54:38.011351158Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 05:54:38.015170 containerd[2140]: time="2025-07-07T05:54:38.015105106Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 05:54:38.020147 containerd[2140]: time="2025-07-07T05:54:38.017930531Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 05:54:38.020147 containerd[2140]: time="2025-07-07T05:54:38.018086603Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 05:54:38.020147 containerd[2140]: time="2025-07-07T05:54:38.018125711Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 7 05:54:38.020147 containerd[2140]: time="2025-07-07T05:54:38.018186383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 05:54:38.020147 containerd[2140]: time="2025-07-07T05:54:38.019820639Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 05:54:38.020147 containerd[2140]: time="2025-07-07T05:54:38.019920611Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 05:54:38.020147 containerd[2140]: time="2025-07-07T05:54:38.019988375Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 05:54:38.020147 containerd[2140]: time="2025-07-07T05:54:38.020056439Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 05:54:38.022396 containerd[2140]: time="2025-07-07T05:54:38.020100695Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 05:54:38.023799 containerd[2140]: time="2025-07-07T05:54:38.020676647Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 05:54:38.024553 containerd[2140]: time="2025-07-07T05:54:38.023687423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.024553 containerd[2140]: time="2025-07-07T05:54:38.024196871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.024553 containerd[2140]: time="2025-07-07T05:54:38.024241727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.024553 containerd[2140]: time="2025-07-07T05:54:38.024325811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.024553 containerd[2140]: time="2025-07-07T05:54:38.024390875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.024553 containerd[2140]: time="2025-07-07T05:54:38.024429803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.024553 containerd[2140]: time="2025-07-07T05:54:38.024493391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.027048 containerd[2140]: time="2025-07-07T05:54:38.025798787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.027048 containerd[2140]: time="2025-07-07T05:54:38.025911731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.027048 containerd[2140]: time="2025-07-07T05:54:38.025978187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.027048 containerd[2140]: time="2025-07-07T05:54:38.026013815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.027048 containerd[2140]: time="2025-07-07T05:54:38.026078651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.027472811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.027585803Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.027658007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.027698315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.027728159Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.028034903Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.028128311Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.028164335Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.028198355Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.028225259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.028257983Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 05:54:38.029563 containerd[2140]: time="2025-07-07T05:54:38.028283699Z" level=info msg="NRI interface is disabled by configuration." Jul 7 05:54:38.039974 containerd[2140]: time="2025-07-07T05:54:38.035409647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 05:54:38.040122 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO Checking if agent identity type OnPrem can be assumed Jul 7 05:54:38.046485 containerd[2140]: time="2025-07-07T05:54:38.043244303Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 05:54:38.046485 containerd[2140]: time="2025-07-07T05:54:38.043519439Z" level=info msg="Connect containerd service" Jul 7 05:54:38.046485 containerd[2140]: time="2025-07-07T05:54:38.043681823Z" level=info msg="using legacy CRI server" Jul 7 05:54:38.046485 containerd[2140]: time="2025-07-07T05:54:38.043721915Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 05:54:38.046485 containerd[2140]: time="2025-07-07T05:54:38.045924275Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 05:54:38.050780 coreos-metadata[2248]: Jul 07 05:54:38.048 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 05:54:38.058905 coreos-metadata[2248]: Jul 07 05:54:38.056 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 7 05:54:38.066077 coreos-metadata[2248]: Jul 07 05:54:38.059 INFO Fetch successful Jul 7 05:54:38.066077 coreos-metadata[2248]: Jul 07 05:54:38.059 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 05:54:38.067572 coreos-metadata[2248]: Jul 07 05:54:38.067 INFO Fetch successful Jul 7 05:54:38.075049 unknown[2248]: wrote ssh authorized keys file for user: core Jul 7 05:54:38.096778 containerd[2140]: time="2025-07-07T05:54:38.094066127Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:54:38.108144 containerd[2140]: time="2025-07-07T05:54:38.097402679Z" level=info msg="Start subscribing containerd event" Jul 7 05:54:38.108144 containerd[2140]: time="2025-07-07T05:54:38.101064179Z" level=info msg="Start recovering state" Jul 7 05:54:38.109093 dbus-daemon[2085]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 05:54:38.109367 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 05:54:38.125015 containerd[2140]: time="2025-07-07T05:54:38.117915011Z" level=info msg="Start event monitor" Jul 7 05:54:38.125015 containerd[2140]: time="2025-07-07T05:54:38.117982727Z" level=info msg="Start snapshots syncer" Jul 7 05:54:38.125015 containerd[2140]: time="2025-07-07T05:54:38.118010999Z" level=info msg="Start cni network conf syncer for default" Jul 7 05:54:38.125015 containerd[2140]: time="2025-07-07T05:54:38.118033343Z" level=info msg="Start streaming server" Jul 7 05:54:38.125015 containerd[2140]: time="2025-07-07T05:54:38.118638155Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 05:54:38.125015 containerd[2140]: time="2025-07-07T05:54:38.118818551Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 05:54:38.125015 containerd[2140]: time="2025-07-07T05:54:38.119030951Z" level=info msg="containerd successfully booted in 0.465101s" Jul 7 05:54:38.120357 dbus-daemon[2085]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2164 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 05:54:38.127277 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 05:54:38.142008 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO Checking if agent identity type EC2 can be assumed Jul 7 05:54:38.149398 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 05:54:38.191779 update-ssh-keys[2315]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:54:38.194175 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 05:54:38.218584 systemd[1]: Finished sshkeys.service. 
Jul 7 05:54:38.222091 polkitd[2318]: Started polkitd version 121 Jul 7 05:54:38.240782 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO Agent will take identity from EC2 Jul 7 05:54:38.242616 polkitd[2318]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 05:54:38.242798 polkitd[2318]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 05:54:38.245836 polkitd[2318]: Finished loading, compiling and executing 2 rules Jul 7 05:54:38.250019 dbus-daemon[2085]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 05:54:38.250340 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 05:54:38.253994 polkitd[2318]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 05:54:38.328325 systemd-resolved[2020]: System hostname changed to 'ip-172-31-16-202'. Jul 7 05:54:38.328970 systemd-hostnamed[2164]: Hostname set to (transient) Jul 7 05:54:38.340963 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 05:54:38.439589 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 05:54:38.539075 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 05:54:38.638562 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 7 05:54:38.740450 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 7 05:54:38.843767 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO [amazon-ssm-agent] Starting Core Agent Jul 7 05:54:38.942287 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 7 05:54:39.042807 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO [Registrar] Starting registrar module Jul 7 05:54:39.143291 amazon-ssm-agent[2186]: 2025-07-07 05:54:37 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 7 05:54:39.215637 sshd_keygen[2120]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 05:54:39.314429 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 05:54:39.328461 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 05:54:39.348279 systemd[1]: Started sshd@0-172.31.16.202:22-139.178.89.65:52490.service - OpenSSH per-connection server daemon (139.178.89.65:52490). Jul 7 05:54:39.366024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:54:39.384068 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 05:54:39.384595 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 05:54:39.384605 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:54:39.399304 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 05:54:39.464372 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 05:54:39.485518 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 05:54:39.500444 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 05:54:39.503575 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 05:54:39.508351 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 05:54:39.511876 systemd[1]: Startup finished in 9.860s (kernel) + 10.087s (userspace) = 19.947s. 
Jul 7 05:54:39.643671 sshd[2362]: Accepted publickey for core from 139.178.89.65 port 52490 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:39.651637 sshd[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:39.676637 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 05:54:39.686583 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 05:54:39.697087 systemd-logind[2106]: New session 1 of user core. Jul 7 05:54:39.735185 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 05:54:39.751694 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 05:54:39.779421 (systemd)[2387]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 05:54:40.062308 systemd[2387]: Queued start job for default target default.target. Jul 7 05:54:40.063039 systemd[2387]: Created slice app.slice - User Application Slice. Jul 7 05:54:40.063080 systemd[2387]: Reached target paths.target - Paths. Jul 7 05:54:40.063112 systemd[2387]: Reached target timers.target - Timers. Jul 7 05:54:40.070914 systemd[2387]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 05:54:40.123427 systemd[2387]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 05:54:40.123541 systemd[2387]: Reached target sockets.target - Sockets. Jul 7 05:54:40.123573 systemd[2387]: Reached target basic.target - Basic System. Jul 7 05:54:40.125665 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 05:54:40.127640 systemd[2387]: Reached target default.target - Main User Target. Jul 7 05:54:40.127730 systemd[2387]: Startup finished in 327ms. Jul 7 05:54:40.133089 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 05:54:40.296901 systemd[1]: Started sshd@1-172.31.16.202:22-139.178.89.65:57076.service - OpenSSH per-connection server daemon (139.178.89.65:57076). Jul 7 05:54:40.505780 amazon-ssm-agent[2186]: 2025-07-07 05:54:40 INFO [EC2Identity] EC2 registration was successful. Jul 7 05:54:40.520984 sshd[2399]: Accepted publickey for core from 139.178.89.65 port 57076 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:40.524475 sshd[2399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:40.534012 systemd-logind[2106]: New session 2 of user core. Jul 7 05:54:40.542809 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 7 05:54:40.552959 amazon-ssm-agent[2186]: 2025-07-07 05:54:40 INFO [CredentialRefresher] credentialRefresher has started Jul 7 05:54:40.558254 amazon-ssm-agent[2186]: 2025-07-07 05:54:40 INFO [CredentialRefresher] Starting credentials refresher loop Jul 7 05:54:40.558254 amazon-ssm-agent[2186]: 2025-07-07 05:54:40 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 7 05:54:40.606280 amazon-ssm-agent[2186]: 2025-07-07 05:54:40 INFO [CredentialRefresher] Next credential rotation will be in 30.941541254366665 minutes Jul 7 05:54:40.653329 kubelet[2363]: E0707 05:54:40.653247 2363 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:54:40.659079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:54:40.660724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:54:40.701148 sshd[2399]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:40.708305 systemd-logind[2106]: Session 2 logged out. Waiting for processes to exit. Jul 7 05:54:40.708988 systemd[1]: sshd@1-172.31.16.202:22-139.178.89.65:57076.service: Deactivated successfully. Jul 7 05:54:40.714553 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 05:54:40.716436 systemd-logind[2106]: Removed session 2. Jul 7 05:54:40.734232 systemd[1]: Started sshd@2-172.31.16.202:22-139.178.89.65:57088.service - OpenSSH per-connection server daemon (139.178.89.65:57088). Jul 7 05:54:40.903298 sshd[2412]: Accepted publickey for core from 139.178.89.65 port 57088 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:40.906515 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:40.915497 systemd-logind[2106]: New session 3 of user core. Jul 7 05:54:40.925374 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 05:54:41.046071 sshd[2412]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:41.052919 systemd[1]: sshd@2-172.31.16.202:22-139.178.89.65:57088.service: Deactivated successfully. Jul 7 05:54:41.057639 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 05:54:41.059701 systemd-logind[2106]: Session 3 logged out. Waiting for processes to exit. Jul 7 05:54:41.061586 systemd-logind[2106]: Removed session 3. Jul 7 05:54:41.077255 systemd[1]: Started sshd@3-172.31.16.202:22-139.178.89.65:57096.service - OpenSSH per-connection server daemon (139.178.89.65:57096). Jul 7 05:54:41.246218 sshd[2420]: Accepted publickey for core from 139.178.89.65 port 57096 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:41.249441 sshd[2420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:41.259139 systemd-logind[2106]: New session 4 of user core. Jul 7 05:54:41.265493 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 05:54:41.395140 sshd[2420]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:41.401571 systemd[1]: sshd@3-172.31.16.202:22-139.178.89.65:57096.service: Deactivated successfully. Jul 7 05:54:41.407730 systemd-logind[2106]: Session 4 logged out. Waiting for processes to exit. Jul 7 05:54:41.407986 systemd[1]: session-4.scope: Deactivated successfully. 
Jul 7 05:54:41.411730 systemd-logind[2106]: Removed session 4. Jul 7 05:54:41.426286 systemd[1]: Started sshd@4-172.31.16.202:22-139.178.89.65:57098.service - OpenSSH per-connection server daemon (139.178.89.65:57098). Jul 7 05:54:41.587233 amazon-ssm-agent[2186]: 2025-07-07 05:54:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 7 05:54:41.611962 sshd[2428]: Accepted publickey for core from 139.178.89.65 port 57098 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:41.614707 sshd[2428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:41.627111 systemd-logind[2106]: New session 5 of user core. Jul 7 05:54:41.631329 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 05:54:41.687600 amazon-ssm-agent[2186]: 2025-07-07 05:54:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2431) started Jul 7 05:54:41.787829 amazon-ssm-agent[2186]: 2025-07-07 05:54:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 7 05:54:41.809167 sudo[2439]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 05:54:41.810109 sudo[2439]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:54:41.828069 sudo[2439]: pam_unix(sudo:session): session closed for user root Jul 7 05:54:41.854618 sshd[2428]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:41.864158 systemd[1]: sshd@4-172.31.16.202:22-139.178.89.65:57098.service: Deactivated successfully. Jul 7 05:54:41.865042 systemd-logind[2106]: Session 5 logged out. Waiting for processes to exit. Jul 7 05:54:41.869807 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 05:54:41.872504 systemd-logind[2106]: Removed session 5. Jul 7 05:54:41.888335 systemd[1]: Started sshd@5-172.31.16.202:22-139.178.89.65:57114.service - OpenSSH per-connection server daemon (139.178.89.65:57114). Jul 7 05:54:42.060926 sshd[2447]: Accepted publickey for core from 139.178.89.65 port 57114 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:42.063669 sshd[2447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:42.071285 systemd-logind[2106]: New session 6 of user core. Jul 7 05:54:42.084438 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 05:54:42.193781 sudo[2452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 05:54:42.194446 sudo[2452]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:54:42.201337 sudo[2452]: pam_unix(sudo:session): session closed for user root Jul 7 05:54:42.211005 sudo[2451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 05:54:42.211616 sudo[2451]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:54:42.240238 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 05:54:42.243291 auditctl[2455]: No rules Jul 7 05:54:42.244157 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 05:54:42.244657 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 05:54:42.257066 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jul 7 05:54:42.305649 augenrules[2474]: No rules Jul 7 05:54:42.309706 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 05:54:42.314264 sudo[2451]: pam_unix(sudo:session): session closed for user root Jul 7 05:54:42.338987 sshd[2447]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:42.344020 systemd[1]: sshd@5-172.31.16.202:22-139.178.89.65:57114.service: Deactivated successfully. Jul 7 05:54:42.352446 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 05:54:42.353815 systemd-logind[2106]: Session 6 logged out. Waiting for processes to exit. Jul 7 05:54:42.355946 systemd-logind[2106]: Removed session 6. Jul 7 05:54:42.368403 systemd[1]: Started sshd@6-172.31.16.202:22-139.178.89.65:57122.service - OpenSSH per-connection server daemon (139.178.89.65:57122). Jul 7 05:54:42.548324 sshd[2483]: Accepted publickey for core from 139.178.89.65 port 57122 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:42.551104 sshd[2483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:42.559498 systemd-logind[2106]: New session 7 of user core. Jul 7 05:54:42.571666 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 05:54:42.681241 sudo[2487]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 05:54:42.682062 sudo[2487]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:54:43.606488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:54:43.619219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:54:43.676155 systemd[1]: Reloading requested from client PID 2521 ('systemctl') (unit session-7.scope)... Jul 7 05:54:43.676184 systemd[1]: Reloading... Jul 7 05:54:43.893466 zram_generator::config[2567]: No configuration found. Jul 7 05:54:44.153547 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:54:44.323988 systemd[1]: Reloading finished in 646 ms. Jul 7 05:54:44.423928 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 05:54:44.424176 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 05:54:44.424859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:54:44.442462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:54:44.755071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:54:44.767463 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:54:44.848791 kubelet[2637]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:54:44.848791 kubelet[2637]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 05:54:44.848791 kubelet[2637]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:54:44.848791 kubelet[2637]: I0707 05:54:44.848282 2637 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:54:45.759657 kubelet[2637]: I0707 05:54:45.759610 2637 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 05:54:45.759976 kubelet[2637]: I0707 05:54:45.759956 2637 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:54:45.760654 kubelet[2637]: I0707 05:54:45.760492 2637 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 05:54:45.812777 kubelet[2637]: I0707 05:54:45.812652 2637 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:54:45.825082 kubelet[2637]: E0707 05:54:45.825015 2637 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:54:45.825082 kubelet[2637]: I0707 05:54:45.825069 2637 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:54:45.831853 kubelet[2637]: I0707 05:54:45.831806 2637 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 05:54:45.832925 kubelet[2637]: I0707 05:54:45.832877 2637 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 05:54:45.834113 kubelet[2637]: I0707 05:54:45.833106 2637 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:54:45.834113 kubelet[2637]: I0707 05:54:45.833165 2637 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.16.202","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 
05:54:45.834113 kubelet[2637]: I0707 05:54:45.833443 2637 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:54:45.834113 kubelet[2637]: I0707 05:54:45.833461 2637 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 05:54:45.834492 kubelet[2637]: I0707 05:54:45.833800 2637 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:54:45.837874 kubelet[2637]: I0707 05:54:45.837824 2637 kubelet.go:408] "Attempting to sync node with API server" Jul 7 05:54:45.837874 kubelet[2637]: I0707 05:54:45.837877 2637 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:54:45.838028 kubelet[2637]: I0707 05:54:45.837913 2637 kubelet.go:314] "Adding apiserver pod source" Jul 7 05:54:45.838028 kubelet[2637]: I0707 05:54:45.837943 2637 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 05:54:45.838474 kubelet[2637]: E0707 05:54:45.838446 2637 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:45.838652 kubelet[2637]: E0707 05:54:45.838630 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:45.844536 kubelet[2637]: I0707 05:54:45.844492 2637 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 05:54:45.845825 kubelet[2637]: I0707 05:54:45.845767 2637 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 05:54:45.846029 kubelet[2637]: W0707 05:54:45.845988 2637 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 05:54:45.848258 kubelet[2637]: I0707 05:54:45.847900 2637 server.go:1274] "Started kubelet" Jul 7 05:54:45.852529 kubelet[2637]: I0707 05:54:45.852489 2637 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 05:54:45.863468 kubelet[2637]: I0707 05:54:45.861665 2637 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 05:54:45.864088 kubelet[2637]: I0707 05:54:45.864052 2637 server.go:449] "Adding debug handlers to kubelet server" Jul 7 05:54:45.872443 kubelet[2637]: I0707 05:54:45.872338 2637 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 05:54:45.872831 kubelet[2637]: I0707 05:54:45.872799 2637 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 05:54:45.874102 kubelet[2637]: I0707 05:54:45.874046 2637 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 05:54:45.878148 kubelet[2637]: I0707 05:54:45.878095 2637 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 05:54:45.878683 kubelet[2637]: E0707 05:54:45.878626 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.202\" not found" Jul 7 05:54:45.881711 kubelet[2637]: I0707 05:54:45.881336 2637 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 05:54:45.881711 kubelet[2637]: I0707 05:54:45.881441 2637 reconciler.go:26] "Reconciler: start to sync state" Jul 7 05:54:45.890169 kubelet[2637]: I0707 05:54:45.890087 2637 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": 
dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 05:54:45.894148 kubelet[2637]: E0707 05:54:45.881579 2637 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.16.202.184fe260e472853d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.16.202,UID:172.31.16.202,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.16.202,},FirstTimestamp:2025-07-07 05:54:45.847860541 +0000 UTC m=+1.071368852,LastTimestamp:2025-07-07 05:54:45.847860541 +0000 UTC m=+1.071368852,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.16.202,}" Jul 7 05:54:45.897272 kubelet[2637]: E0707 05:54:45.897170 2637 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 05:54:45.898541 kubelet[2637]: W0707 05:54:45.898485 2637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.16.202" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 7 05:54:45.898944 kubelet[2637]: E0707 05:54:45.898905 2637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.16.202\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 7 05:54:45.899020 kubelet[2637]: W0707 05:54:45.897725 2637 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 7 05:54:45.899075 kubelet[2637]: E0707 05:54:45.899032 2637 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 7 05:54:45.904854 kubelet[2637]: I0707 05:54:45.904677 2637 factory.go:221] Registration of the containerd container factory successfully Jul 7 05:54:45.904854 kubelet[2637]: I0707 05:54:45.904721 2637 factory.go:221] Registration of the systemd container factory successfully Jul 7 05:54:45.952392 kubelet[2637]: E0707 05:54:45.952328 2637 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.16.202\" not found" node="172.31.16.202" Jul 7 05:54:45.962868 kubelet[2637]: I0707 05:54:45.962824 2637 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 05:54:45.962868 kubelet[2637]: I0707 05:54:45.962857 2637 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 05:54:45.962868 kubelet[2637]: I0707 05:54:45.962891 2637 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:54:45.970418 kubelet[2637]: I0707 05:54:45.970382 2637 policy_none.go:49] "None policy: Start" Jul 7 05:54:45.972581 kubelet[2637]: I0707 05:54:45.972076 2637 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:54:45.972581 
kubelet[2637]: I0707 05:54:45.972117 2637 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:54:45.979239 kubelet[2637]: E0707 05:54:45.979198 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.202\" not found" Jul 7 05:54:45.984794 kubelet[2637]: I0707 05:54:45.983834 2637 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:54:45.984794 kubelet[2637]: I0707 05:54:45.984115 2637 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:54:45.984794 kubelet[2637]: I0707 05:54:45.984135 2637 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:54:45.987861 kubelet[2637]: I0707 05:54:45.987830 2637 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:54:45.994398 kubelet[2637]: E0707 05:54:45.994358 2637 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.202\" not found" Jul 7 05:54:45.999690 kubelet[2637]: I0707 05:54:45.999428 2637 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 05:54:46.001802 kubelet[2637]: I0707 05:54:46.001697 2637 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 05:54:46.002159 kubelet[2637]: I0707 05:54:46.001834 2637 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:54:46.002159 kubelet[2637]: I0707 05:54:46.001868 2637 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:54:46.002159 kubelet[2637]: E0707 05:54:46.001972 2637 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 7 05:54:46.086140 kubelet[2637]: I0707 05:54:46.085465 2637 kubelet_node_status.go:72] "Attempting to register node" node="172.31.16.202" Jul 7 05:54:46.096760 kubelet[2637]: I0707 05:54:46.096479 2637 kubelet_node_status.go:75] "Successfully registered node" node="172.31.16.202" Jul 7 05:54:46.096760 kubelet[2637]: E0707 05:54:46.096531 2637 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.16.202\": node \"172.31.16.202\" not found" Jul 7 05:54:46.142014 kubelet[2637]: E0707 05:54:46.141960 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.202\" not found" Jul 7 05:54:46.242834 kubelet[2637]: E0707 05:54:46.242772 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.202\" not found" Jul 7 05:54:46.343674 kubelet[2637]: E0707 05:54:46.343533 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.202\" not found" Jul 7 05:54:46.349800 sudo[2487]: pam_unix(sudo:session): session closed for user root Jul 7 05:54:46.374030 sshd[2483]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:46.379112 systemd[1]: sshd@6-172.31.16.202:22-139.178.89.65:57122.service: Deactivated successfully. Jul 7 05:54:46.387469 systemd-logind[2106]: Session 7 logged out. Waiting for processes to exit. Jul 7 05:54:46.388511 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 05:54:46.391674 systemd-logind[2106]: Removed session 7. 
Jul 7 05:54:46.444672 kubelet[2637]: E0707 05:54:46.444618 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.202\" not found" Jul 7 05:54:46.545250 kubelet[2637]: E0707 05:54:46.545192 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.202\" not found" Jul 7 05:54:46.645864 kubelet[2637]: E0707 05:54:46.645726 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.202\" not found" Jul 7 05:54:46.746359 kubelet[2637]: E0707 05:54:46.746304 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.202\" not found" Jul 7 05:54:46.770789 kubelet[2637]: I0707 05:54:46.770525 2637 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 7 05:54:46.771239 kubelet[2637]: W0707 05:54:46.770809 2637 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 7 05:54:46.771239 kubelet[2637]: W0707 05:54:46.771191 2637 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 7 05:54:46.839479 kubelet[2637]: E0707 05:54:46.839425 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:46.847137 kubelet[2637]: E0707 05:54:46.847104 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.202\" not found" Jul 7 05:54:46.949075 kubelet[2637]: I0707 05:54:46.948782 2637 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 7 05:54:46.950233 kubelet[2637]: I0707 05:54:46.949718 2637 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 7 05:54:46.950326 containerd[2140]: time="2025-07-07T05:54:46.949364701Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 05:54:47.838649 kubelet[2637]: I0707 05:54:47.838527 2637 apiserver.go:52] "Watching apiserver" Jul 7 05:54:47.839694 kubelet[2637]: E0707 05:54:47.839651 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:47.851771 kubelet[2637]: E0707 05:54:47.850639 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npz5f" podUID="91c80da1-8133-4ef3-be15-3ede4b1f00b5" Jul 7 05:54:47.882135 kubelet[2637]: I0707 05:54:47.882101 2637 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:54:47.897126 kubelet[2637]: I0707 05:54:47.897064 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d5afacbd-2243-46f0-bcc3-5006d4b1a256-flexvol-driver-host\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.898621 kubelet[2637]: I0707 05:54:47.898574 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d5afacbd-2243-46f0-bcc3-5006d4b1a256-policysync\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.898727 kubelet[2637]: I0707 05:54:47.898652 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d5afacbd-2243-46f0-bcc3-5006d4b1a256-var-lib-calico\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.898727 kubelet[2637]: I0707 05:54:47.898693 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt5p2\" (UniqueName: \"kubernetes.io/projected/0a94cdef-64df-4b07-9ae0-a2ac08709fde-kube-api-access-lt5p2\") pod \"kube-proxy-bnt4b\" (UID: \"0a94cdef-64df-4b07-9ae0-a2ac08709fde\") " pod="kube-system/kube-proxy-bnt4b" Jul 7 05:54:47.898979 kubelet[2637]: I0707 05:54:47.898931 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4pwc\" (UniqueName: \"kubernetes.io/projected/91c80da1-8133-4ef3-be15-3ede4b1f00b5-kube-api-access-x4pwc\") pod \"csi-node-driver-npz5f\" (UID: \"91c80da1-8133-4ef3-be15-3ede4b1f00b5\") " pod="calico-system/csi-node-driver-npz5f" Jul 7 05:54:47.899075 kubelet[2637]: I0707 05:54:47.899054 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a94cdef-64df-4b07-9ae0-a2ac08709fde-xtables-lock\") pod \"kube-proxy-bnt4b\" (UID: \"0a94cdef-64df-4b07-9ae0-a2ac08709fde\") " pod="kube-system/kube-proxy-bnt4b" Jul 7 05:54:47.899180 kubelet[2637]: I0707 05:54:47.899095 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a94cdef-64df-4b07-9ae0-a2ac08709fde-lib-modules\") pod \"kube-proxy-bnt4b\" (UID: \"0a94cdef-64df-4b07-9ae0-a2ac08709fde\") " pod="kube-system/kube-proxy-bnt4b" Jul 7 
05:54:47.899180 kubelet[2637]: I0707 05:54:47.899129 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5afacbd-2243-46f0-bcc3-5006d4b1a256-lib-modules\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.899441 kubelet[2637]: I0707 05:54:47.899180 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5afacbd-2243-46f0-bcc3-5006d4b1a256-xtables-lock\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.899441 kubelet[2637]: I0707 05:54:47.899222 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91c80da1-8133-4ef3-be15-3ede4b1f00b5-registration-dir\") pod \"csi-node-driver-npz5f\" (UID: \"91c80da1-8133-4ef3-be15-3ede4b1f00b5\") " pod="calico-system/csi-node-driver-npz5f" Jul 7 05:54:47.899441 kubelet[2637]: I0707 05:54:47.899268 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/91c80da1-8133-4ef3-be15-3ede4b1f00b5-socket-dir\") pod \"csi-node-driver-npz5f\" (UID: \"91c80da1-8133-4ef3-be15-3ede4b1f00b5\") " pod="calico-system/csi-node-driver-npz5f" Jul 7 05:54:47.899441 kubelet[2637]: I0707 05:54:47.899315 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/91c80da1-8133-4ef3-be15-3ede4b1f00b5-varrun\") pod \"csi-node-driver-npz5f\" (UID: \"91c80da1-8133-4ef3-be15-3ede4b1f00b5\") " pod="calico-system/csi-node-driver-npz5f" Jul 7 05:54:47.899441 kubelet[2637]: I0707 05:54:47.899360 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d5afacbd-2243-46f0-bcc3-5006d4b1a256-cni-net-dir\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.899699 kubelet[2637]: I0707 05:54:47.899396 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d5afacbd-2243-46f0-bcc3-5006d4b1a256-node-certs\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.899699 kubelet[2637]: I0707 05:54:47.899430 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d5afacbd-2243-46f0-bcc3-5006d4b1a256-var-run-calico\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.899699 kubelet[2637]: I0707 05:54:47.899463 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91c80da1-8133-4ef3-be15-3ede4b1f00b5-kubelet-dir\") pod \"csi-node-driver-npz5f\" (UID: \"91c80da1-8133-4ef3-be15-3ede4b1f00b5\") " pod="calico-system/csi-node-driver-npz5f" Jul 7 05:54:47.899699 kubelet[2637]: I0707 05:54:47.899497 2637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a94cdef-64df-4b07-9ae0-a2ac08709fde-kube-proxy\") pod \"kube-proxy-bnt4b\" (UID: \"0a94cdef-64df-4b07-9ae0-a2ac08709fde\") " pod="kube-system/kube-proxy-bnt4b" Jul 7 05:54:47.899699 kubelet[2637]: I0707 05:54:47.899533 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d5afacbd-2243-46f0-bcc3-5006d4b1a256-cni-bin-dir\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.899971 kubelet[2637]: I0707 05:54:47.899586 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d5afacbd-2243-46f0-bcc3-5006d4b1a256-cni-log-dir\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.899971 kubelet[2637]: I0707 05:54:47.899631 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5afacbd-2243-46f0-bcc3-5006d4b1a256-tigera-ca-bundle\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:47.899971 kubelet[2637]: I0707 05:54:47.899677 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflk7\" (UniqueName: \"kubernetes.io/projected/d5afacbd-2243-46f0-bcc3-5006d4b1a256-kube-api-access-hflk7\") pod \"calico-node-jxj7d\" (UID: \"d5afacbd-2243-46f0-bcc3-5006d4b1a256\") " pod="calico-system/calico-node-jxj7d" Jul 7 05:54:48.007473 kubelet[2637]: E0707 05:54:48.006877 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.007473 kubelet[2637]: W0707 05:54:48.006939 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.007473 kubelet[2637]: E0707 05:54:48.007097 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:48.009947 kubelet[2637]: E0707 05:54:48.008637 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.009947 kubelet[2637]: W0707 05:54:48.008679 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.009947 kubelet[2637]: E0707 05:54:48.009870 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:48.010378 kubelet[2637]: E0707 05:54:48.010238 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.010452 kubelet[2637]: W0707 05:54:48.010259 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.010726 kubelet[2637]: E0707 05:54:48.010641 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:48.011451 kubelet[2637]: E0707 05:54:48.011403 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.011451 kubelet[2637]: W0707 05:54:48.011438 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.011730 kubelet[2637]: E0707 05:54:48.011670 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:48.018703 kubelet[2637]: E0707 05:54:48.018634 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.018703 kubelet[2637]: W0707 05:54:48.018692 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.019064 kubelet[2637]: E0707 05:54:48.018815 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:48.022565 kubelet[2637]: E0707 05:54:48.021225 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.022565 kubelet[2637]: W0707 05:54:48.021259 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.022565 kubelet[2637]: E0707 05:54:48.021705 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.022565 kubelet[2637]: W0707 05:54:48.021722 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.022947 kubelet[2637]: E0707 05:54:48.022879 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.022947 kubelet[2637]: W0707 05:54:48.022937 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.023163 kubelet[2637]: E0707 05:54:48.022969 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:48.023163 kubelet[2637]: E0707 05:54:48.023036 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:48.023786 kubelet[2637]: E0707 05:54:48.023293 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:48.023786 kubelet[2637]: E0707 05:54:48.023536 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.023786 kubelet[2637]: W0707 05:54:48.023556 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.023786 kubelet[2637]: E0707 05:54:48.023579 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:48.024783 kubelet[2637]: E0707 05:54:48.024181 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.024783 kubelet[2637]: W0707 05:54:48.024211 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.024783 kubelet[2637]: E0707 05:54:48.024260 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:48.042182 kubelet[2637]: E0707 05:54:48.042132 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.042182 kubelet[2637]: W0707 05:54:48.042171 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.042380 kubelet[2637]: E0707 05:54:48.042204 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:48.058926 kubelet[2637]: E0707 05:54:48.058139 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.058926 kubelet[2637]: W0707 05:54:48.058180 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.058926 kubelet[2637]: E0707 05:54:48.058234 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:48.071527 kubelet[2637]: E0707 05:54:48.069935 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:48.071527 kubelet[2637]: W0707 05:54:48.069995 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:48.071527 kubelet[2637]: E0707 05:54:48.070028 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:48.155870 containerd[2140]: time="2025-07-07T05:54:48.155346879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bnt4b,Uid:0a94cdef-64df-4b07-9ae0-a2ac08709fde,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:48.160383 containerd[2140]: time="2025-07-07T05:54:48.160253649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jxj7d,Uid:d5afacbd-2243-46f0-bcc3-5006d4b1a256,Namespace:calico-system,Attempt:0,}" Jul 7 05:54:48.787706 containerd[2140]: time="2025-07-07T05:54:48.787626543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:54:48.791790 containerd[2140]: time="2025-07-07T05:54:48.790276198Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:54:48.799552 containerd[2140]: time="2025-07-07T05:54:48.799505089Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 7 05:54:48.799953 containerd[2140]: time="2025-07-07T05:54:48.799911446Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:54:48.800723 containerd[2140]: time="2025-07-07T05:54:48.800686378Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:54:48.805177 containerd[2140]: time="2025-07-07T05:54:48.805103528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:54:48.809459 containerd[2140]: time="2025-07-07T05:54:48.809395713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 648.734964ms" Jul 7 05:54:48.812356 containerd[2140]: time="2025-07-07T05:54:48.812302183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 656.783886ms" Jul 7 05:54:48.840114 kubelet[2637]: E0707 05:54:48.840046 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:49.026633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540277970.mount: Deactivated successfully. Jul 7 05:54:49.091153 containerd[2140]: time="2025-07-07T05:54:49.090081593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:49.091153 containerd[2140]: time="2025-07-07T05:54:49.090183890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:49.091153 containerd[2140]: time="2025-07-07T05:54:49.090442876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:49.092002 containerd[2140]: time="2025-07-07T05:54:49.090497329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:49.092171 containerd[2140]: time="2025-07-07T05:54:49.092063721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:49.093647 containerd[2140]: time="2025-07-07T05:54:49.092211799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:49.094717 containerd[2140]: time="2025-07-07T05:54:49.094445982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:49.095146 containerd[2140]: time="2025-07-07T05:54:49.095037178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:49.323150 containerd[2140]: time="2025-07-07T05:54:49.323098387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bnt4b,Uid:0a94cdef-64df-4b07-9ae0-a2ac08709fde,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dcffdef18462e2ae2480da039434e7c8f736d9c32a8735a66e95861edcf9170\"" Jul 7 05:54:49.328076 containerd[2140]: time="2025-07-07T05:54:49.328024587Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 05:54:49.334991 containerd[2140]: time="2025-07-07T05:54:49.334922732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jxj7d,Uid:d5afacbd-2243-46f0-bcc3-5006d4b1a256,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc19851f3e20554271a0d718dd79df33927d8341776bd64cf326ddadc893b4b8\"" Jul 7 05:54:49.841104 kubelet[2637]: E0707 05:54:49.841039 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:50.004156 kubelet[2637]: E0707 05:54:50.002946 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npz5f" podUID="91c80da1-8133-4ef3-be15-3ede4b1f00b5" Jul 7 05:54:50.576473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22662587.mount: Deactivated successfully. 
Jul 7 05:54:50.842659 kubelet[2637]: E0707 05:54:50.842196 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:51.147846 containerd[2140]: time="2025-07-07T05:54:51.146899604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:51.149113 containerd[2140]: time="2025-07-07T05:54:51.148787903Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 7 05:54:51.151348 containerd[2140]: time="2025-07-07T05:54:51.151257060Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:51.157041 containerd[2140]: time="2025-07-07T05:54:51.156941928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:51.158633 containerd[2140]: time="2025-07-07T05:54:51.158359859Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.830056916s" Jul 7 05:54:51.158633 containerd[2140]: time="2025-07-07T05:54:51.158420884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 7 05:54:51.160220 containerd[2140]: time="2025-07-07T05:54:51.160054419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 05:54:51.162724 containerd[2140]: time="2025-07-07T05:54:51.162452295Z" level=info msg="CreateContainer within sandbox \"5dcffdef18462e2ae2480da039434e7c8f736d9c32a8735a66e95861edcf9170\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 05:54:51.202421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078928850.mount: Deactivated successfully. 
Jul 7 05:54:51.203790 containerd[2140]: time="2025-07-07T05:54:51.202935452Z" level=info msg="CreateContainer within sandbox \"5dcffdef18462e2ae2480da039434e7c8f736d9c32a8735a66e95861edcf9170\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"40a4330f7c47f9453a3a5881f0c785439f4b2b473680227798556ba03e85860b\"" Jul 7 05:54:51.206997 containerd[2140]: time="2025-07-07T05:54:51.206467445Z" level=info msg="StartContainer for \"40a4330f7c47f9453a3a5881f0c785439f4b2b473680227798556ba03e85860b\"" Jul 7 05:54:51.307021 containerd[2140]: time="2025-07-07T05:54:51.306920254Z" level=info msg="StartContainer for \"40a4330f7c47f9453a3a5881f0c785439f4b2b473680227798556ba03e85860b\" returns successfully" Jul 7 05:54:51.843030 kubelet[2637]: E0707 05:54:51.842918 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:52.003170 kubelet[2637]: E0707 05:54:52.003098 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npz5f" podUID="91c80da1-8133-4ef3-be15-3ede4b1f00b5" Jul 7 05:54:52.058883 kubelet[2637]: I0707 05:54:52.058717 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bnt4b" podStartSLOduration=4.225964658 podStartE2EDuration="6.058697783s" podCreationTimestamp="2025-07-07 05:54:46 +0000 UTC" firstStartedPulling="2025-07-07 05:54:49.327024131 +0000 UTC m=+4.550532442" lastFinishedPulling="2025-07-07 05:54:51.159757256 +0000 UTC m=+6.383265567" observedRunningTime="2025-07-07 05:54:52.058633112 +0000 UTC m=+7.282141446" watchObservedRunningTime="2025-07-07 05:54:52.058697783 +0000 UTC m=+7.282206094" Jul 7 05:54:52.108782 kubelet[2637]: E0707 05:54:52.108481 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.108782 kubelet[2637]: W0707 05:54:52.108511 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.108782 kubelet[2637]: E0707 05:54:52.108542 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.109420 kubelet[2637]: E0707 05:54:52.109280 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.109420 kubelet[2637]: W0707 05:54:52.109306 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.109420 kubelet[2637]: E0707 05:54:52.109333 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:52.110141 kubelet[2637]: E0707 05:54:52.109986 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.110141 kubelet[2637]: W0707 05:54:52.110009 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.110141 kubelet[2637]: E0707 05:54:52.110032 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.110828 kubelet[2637]: E0707 05:54:52.110667 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.110828 kubelet[2637]: W0707 05:54:52.110687 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.110828 kubelet[2637]: E0707 05:54:52.110707 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.111547 kubelet[2637]: E0707 05:54:52.111390 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.111547 kubelet[2637]: W0707 05:54:52.111411 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.111547 kubelet[2637]: E0707 05:54:52.111431 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.112153 kubelet[2637]: E0707 05:54:52.112028 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.112153 kubelet[2637]: W0707 05:54:52.112047 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.112153 kubelet[2637]: E0707 05:54:52.112068 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.112790 kubelet[2637]: E0707 05:54:52.112698 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.112790 kubelet[2637]: W0707 05:54:52.112718 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.113010 kubelet[2637]: E0707 05:54:52.112764 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:52.113525 kubelet[2637]: E0707 05:54:52.113376 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.113525 kubelet[2637]: W0707 05:54:52.113396 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.113525 kubelet[2637]: E0707 05:54:52.113417 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.114229 kubelet[2637]: E0707 05:54:52.114113 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.114229 kubelet[2637]: W0707 05:54:52.114134 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.114229 kubelet[2637]: E0707 05:54:52.114156 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.114808 kubelet[2637]: E0707 05:54:52.114652 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.114808 kubelet[2637]: W0707 05:54:52.114671 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.114808 kubelet[2637]: E0707 05:54:52.114691 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.115428 kubelet[2637]: E0707 05:54:52.115301 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.115428 kubelet[2637]: W0707 05:54:52.115322 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.115428 kubelet[2637]: E0707 05:54:52.115342 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.115992 kubelet[2637]: E0707 05:54:52.115868 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.115992 kubelet[2637]: W0707 05:54:52.115888 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.115992 kubelet[2637]: E0707 05:54:52.115907 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:52.116532 kubelet[2637]: E0707 05:54:52.116432 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.116532 kubelet[2637]: W0707 05:54:52.116451 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.116532 kubelet[2637]: E0707 05:54:52.116470 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.117113 kubelet[2637]: E0707 05:54:52.116956 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.117113 kubelet[2637]: W0707 05:54:52.116975 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.117113 kubelet[2637]: E0707 05:54:52.116994 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.117537 kubelet[2637]: E0707 05:54:52.117411 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.117537 kubelet[2637]: W0707 05:54:52.117445 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.117537 kubelet[2637]: E0707 05:54:52.117468 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.118138 kubelet[2637]: E0707 05:54:52.117979 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.118138 kubelet[2637]: W0707 05:54:52.117998 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.118138 kubelet[2637]: E0707 05:54:52.118017 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.118969 kubelet[2637]: E0707 05:54:52.118729 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.118969 kubelet[2637]: W0707 05:54:52.118808 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.118969 kubelet[2637]: E0707 05:54:52.118835 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:52.119379 kubelet[2637]: E0707 05:54:52.119208 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.119379 kubelet[2637]: W0707 05:54:52.119227 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.119379 kubelet[2637]: E0707 05:54:52.119247 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.119663 kubelet[2637]: E0707 05:54:52.119644 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.119786 kubelet[2637]: W0707 05:54:52.119735 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.120007 kubelet[2637]: E0707 05:54:52.119863 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.120539 kubelet[2637]: E0707 05:54:52.120448 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.120539 kubelet[2637]: W0707 05:54:52.120501 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.120539 kubelet[2637]: E0707 05:54:52.120599 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.125143 kubelet[2637]: E0707 05:54:52.125105 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.125143 kubelet[2637]: W0707 05:54:52.125138 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.125310 kubelet[2637]: E0707 05:54:52.125169 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.125635 kubelet[2637]: E0707 05:54:52.125607 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.125793 kubelet[2637]: W0707 05:54:52.125634 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.125793 kubelet[2637]: E0707 05:54:52.125668 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:52.126072 kubelet[2637]: E0707 05:54:52.126046 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.126135 kubelet[2637]: W0707 05:54:52.126072 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.126135 kubelet[2637]: E0707 05:54:52.126103 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.126461 kubelet[2637]: E0707 05:54:52.126436 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.126556 kubelet[2637]: W0707 05:54:52.126461 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.126556 kubelet[2637]: E0707 05:54:52.126493 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.126841 kubelet[2637]: E0707 05:54:52.126815 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.126902 kubelet[2637]: W0707 05:54:52.126841 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.126968 kubelet[2637]: E0707 05:54:52.126951 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.127231 kubelet[2637]: E0707 05:54:52.127206 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.127310 kubelet[2637]: W0707 05:54:52.127230 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.127310 kubelet[2637]: E0707 05:54:52.127259 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.127613 kubelet[2637]: E0707 05:54:52.127587 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.127687 kubelet[2637]: W0707 05:54:52.127616 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.127687 kubelet[2637]: E0707 05:54:52.127649 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 05:54:52.127980 kubelet[2637]: E0707 05:54:52.127955 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.128060 kubelet[2637]: W0707 05:54:52.127980 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.128060 kubelet[2637]: E0707 05:54:52.128016 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.128453 kubelet[2637]: E0707 05:54:52.128424 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.128530 kubelet[2637]: W0707 05:54:52.128453 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.128806 kubelet[2637]: E0707 05:54:52.128668 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.129181 kubelet[2637]: E0707 05:54:52.129137 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.129181 kubelet[2637]: W0707 05:54:52.129167 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.129332 kubelet[2637]: E0707 05:54:52.129206 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.129970 kubelet[2637]: E0707 05:54:52.129774 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.129970 kubelet[2637]: W0707 05:54:52.129799 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.129970 kubelet[2637]: E0707 05:54:52.129840 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.130205 kubelet[2637]: E0707 05:54:52.130178 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 05:54:52.130294 kubelet[2637]: W0707 05:54:52.130205 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 05:54:52.130294 kubelet[2637]: E0707 05:54:52.130228 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 05:54:52.527658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508113392.mount: Deactivated successfully. 
Jul 7 05:54:52.659809 containerd[2140]: time="2025-07-07T05:54:52.659368927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:52.661560 containerd[2140]: time="2025-07-07T05:54:52.661490594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Jul 7 05:54:52.664013 containerd[2140]: time="2025-07-07T05:54:52.663941640Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:52.668728 containerd[2140]: time="2025-07-07T05:54:52.668662934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:52.670348 containerd[2140]: time="2025-07-07T05:54:52.670156247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.510034949s" Jul 7 05:54:52.670348 containerd[2140]: time="2025-07-07T05:54:52.670210520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 05:54:52.674082 containerd[2140]: time="2025-07-07T05:54:52.673908834Z" level=info msg="CreateContainer within sandbox \"dc19851f3e20554271a0d718dd79df33927d8341776bd64cf326ddadc893b4b8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 05:54:52.705344 containerd[2140]: time="2025-07-07T05:54:52.705184460Z" level=info msg="CreateContainer within sandbox \"dc19851f3e20554271a0d718dd79df33927d8341776bd64cf326ddadc893b4b8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"84eea418e66bc82a3be927162d35e54545326d86513e64acabfb81018301f222\"" Jul 7 05:54:52.707002 containerd[2140]: time="2025-07-07T05:54:52.706576388Z" level=info msg="StartContainer for \"84eea418e66bc82a3be927162d35e54545326d86513e64acabfb81018301f222\"" Jul 7 05:54:52.819418 containerd[2140]: time="2025-07-07T05:54:52.819251742Z" level=info msg="StartContainer for \"84eea418e66bc82a3be927162d35e54545326d86513e64acabfb81018301f222\" returns successfully" Jul 7 05:54:52.844787 kubelet[2637]: E0707 05:54:52.843127 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:52.879479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84eea418e66bc82a3be927162d35e54545326d86513e64acabfb81018301f222-rootfs.mount: Deactivated successfully. 
Jul 7 05:54:53.068777 containerd[2140]: time="2025-07-07T05:54:53.068511890Z" level=info msg="shim disconnected" id=84eea418e66bc82a3be927162d35e54545326d86513e64acabfb81018301f222 namespace=k8s.io Jul 7 05:54:53.068777 containerd[2140]: time="2025-07-07T05:54:53.068589012Z" level=warning msg="cleaning up after shim disconnected" id=84eea418e66bc82a3be927162d35e54545326d86513e64acabfb81018301f222 namespace=k8s.io Jul 7 05:54:53.068777 containerd[2140]: time="2025-07-07T05:54:53.068617977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:54:53.844115 kubelet[2637]: E0707 05:54:53.844059 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:54.004491 kubelet[2637]: E0707 05:54:54.004019 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npz5f" podUID="91c80da1-8133-4ef3-be15-3ede4b1f00b5" Jul 7 05:54:54.046163 containerd[2140]: time="2025-07-07T05:54:54.046111277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 05:54:54.845413 kubelet[2637]: E0707 05:54:54.845329 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:55.845685 kubelet[2637]: E0707 05:54:55.845483 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:56.006314 kubelet[2637]: E0707 05:54:56.005945 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npz5f" podUID="91c80da1-8133-4ef3-be15-3ede4b1f00b5" Jul 7 05:54:56.846481 kubelet[2637]: E0707 05:54:56.846300 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:57.013081 containerd[2140]: time="2025-07-07T05:54:57.013001195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:57.015080 containerd[2140]: time="2025-07-07T05:54:57.014997380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 05:54:57.017370 containerd[2140]: time="2025-07-07T05:54:57.017292828Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:57.022684 containerd[2140]: time="2025-07-07T05:54:57.022584413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:57.025194 containerd[2140]: time="2025-07-07T05:54:57.024570511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.978393771s" 
Jul 7 05:54:57.025194 containerd[2140]: time="2025-07-07T05:54:57.024685330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 05:54:57.029533 containerd[2140]: time="2025-07-07T05:54:57.029225719Z" level=info msg="CreateContainer within sandbox \"dc19851f3e20554271a0d718dd79df33927d8341776bd64cf326ddadc893b4b8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 05:54:57.063633 containerd[2140]: time="2025-07-07T05:54:57.063567179Z" level=info msg="CreateContainer within sandbox \"dc19851f3e20554271a0d718dd79df33927d8341776bd64cf326ddadc893b4b8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb5a6b61f93cd30aa501d7f1fcd00c90894e7bb92a19a7171f28bf9932d5971a\"" Jul 7 05:54:57.064814 containerd[2140]: time="2025-07-07T05:54:57.064550807Z" level=info msg="StartContainer for \"fb5a6b61f93cd30aa501d7f1fcd00c90894e7bb92a19a7171f28bf9932d5971a\"" Jul 7 05:54:57.169111 containerd[2140]: time="2025-07-07T05:54:57.168033816Z" level=info msg="StartContainer for \"fb5a6b61f93cd30aa501d7f1fcd00c90894e7bb92a19a7171f28bf9932d5971a\" returns successfully" Jul 7 05:54:57.847555 kubelet[2637]: E0707 05:54:57.847469 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:58.004357 kubelet[2637]: E0707 05:54:58.003055 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-npz5f" podUID="91c80da1-8133-4ef3-be15-3ede4b1f00b5" Jul 7 05:54:58.130604 containerd[2140]: time="2025-07-07T05:54:58.130427114Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:54:58.168666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb5a6b61f93cd30aa501d7f1fcd00c90894e7bb92a19a7171f28bf9932d5971a-rootfs.mount: Deactivated successfully. 
Jul 7 05:54:58.225902 kubelet[2637]: I0707 05:54:58.224931 2637 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 05:54:58.778192 containerd[2140]: time="2025-07-07T05:54:58.778106354Z" level=info msg="shim disconnected" id=fb5a6b61f93cd30aa501d7f1fcd00c90894e7bb92a19a7171f28bf9932d5971a namespace=k8s.io Jul 7 05:54:58.778192 containerd[2140]: time="2025-07-07T05:54:58.778179050Z" level=warning msg="cleaning up after shim disconnected" id=fb5a6b61f93cd30aa501d7f1fcd00c90894e7bb92a19a7171f28bf9932d5971a namespace=k8s.io Jul 7 05:54:58.778608 containerd[2140]: time="2025-07-07T05:54:58.778199140Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:54:58.848092 kubelet[2637]: E0707 05:54:58.848016 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:59.071813 containerd[2140]: time="2025-07-07T05:54:59.071531217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 05:54:59.681498 kubelet[2637]: I0707 05:54:59.681381 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqfmb\" (UniqueName: \"kubernetes.io/projected/f3f438c7-a687-46f3-af1c-f1c6e6a0121e-kube-api-access-fqfmb\") pod \"nginx-deployment-8587fbcb89-5tlrj\" (UID: \"f3f438c7-a687-46f3-af1c-f1c6e6a0121e\") " pod="default/nginx-deployment-8587fbcb89-5tlrj" Jul 7 05:54:59.848566 kubelet[2637]: E0707 05:54:59.848499 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:54:59.964849 containerd[2140]: time="2025-07-07T05:54:59.964652760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-5tlrj,Uid:f3f438c7-a687-46f3-af1c-f1c6e6a0121e,Namespace:default,Attempt:0,}" Jul 7 05:55:00.012791 containerd[2140]: time="2025-07-07T05:55:00.011516712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-npz5f,Uid:91c80da1-8133-4ef3-be15-3ede4b1f00b5,Namespace:calico-system,Attempt:0,}" Jul 7 05:55:00.145668 containerd[2140]: time="2025-07-07T05:55:00.145605806Z" level=error msg="Failed to destroy network for sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:55:00.146899 containerd[2140]: time="2025-07-07T05:55:00.146836823Z" level=error msg="encountered an error cleaning up failed sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:55:00.147125 containerd[2140]: time="2025-07-07T05:55:00.147083743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-npz5f,Uid:91c80da1-8133-4ef3-be15-3ede4b1f00b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:55:00.147836 kubelet[2637]: E0707 05:55:00.147514 2637 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:55:00.147964 kubelet[2637]: E0707 05:55:00.147908 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-npz5f" Jul 7 05:55:00.148051 kubelet[2637]: E0707 05:55:00.147968 2637 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-npz5f" Jul 7 05:55:00.149884 kubelet[2637]: E0707 05:55:00.148831 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-npz5f_calico-system(91c80da1-8133-4ef3-be15-3ede4b1f00b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-npz5f_calico-system(91c80da1-8133-4ef3-be15-3ede4b1f00b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-npz5f" podUID="91c80da1-8133-4ef3-be15-3ede4b1f00b5" Jul 7 05:55:00.150415 containerd[2140]: time="2025-07-07T05:55:00.150354795Z" level=error msg="Failed to destroy network for sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:55:00.151914 containerd[2140]: time="2025-07-07T05:55:00.151833991Z" level=error msg="encountered an error cleaning up failed sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:55:00.152048 containerd[2140]: time="2025-07-07T05:55:00.151940905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-5tlrj,Uid:f3f438c7-a687-46f3-af1c-f1c6e6a0121e,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:55:00.152439 kubelet[2637]: 
E0707 05:55:00.152374 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:55:00.152526 kubelet[2637]: E0707 05:55:00.152472 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-5tlrj" Jul 7 05:55:00.152526 kubelet[2637]: E0707 05:55:00.152512 2637 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-5tlrj" Jul 7 05:55:00.152653 kubelet[2637]: E0707 05:55:00.152585 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-5tlrj_default(f3f438c7-a687-46f3-af1c-f1c6e6a0121e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-5tlrj_default(f3f438c7-a687-46f3-af1c-f1c6e6a0121e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-5tlrj" podUID="f3f438c7-a687-46f3-af1c-f1c6e6a0121e" Jul 7 05:55:00.807898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848-shm.mount: Deactivated successfully. Jul 7 05:55:00.808171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc-shm.mount: Deactivated successfully. 
Jul 7 05:55:00.849225 kubelet[2637]: E0707 05:55:00.849177 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:01.077011 kubelet[2637]: I0707 05:55:01.075983 2637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:01.078768 containerd[2140]: time="2025-07-07T05:55:01.078542044Z" level=info msg="StopPodSandbox for \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\"" Jul 7 05:55:01.080824 containerd[2140]: time="2025-07-07T05:55:01.079650757Z" level=info msg="Ensure that sandbox 557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848 in task-service has been cleanup successfully" Jul 7 05:55:01.081165 kubelet[2637]: I0707 05:55:01.080156 2637 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:01.081632 containerd[2140]: time="2025-07-07T05:55:01.081558499Z" level=info msg="StopPodSandbox for \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\"" Jul 7 05:55:01.082296 containerd[2140]: time="2025-07-07T05:55:01.082244831Z" level=info msg="Ensure that sandbox b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc in task-service has been cleanup successfully" Jul 7 05:55:01.157388 containerd[2140]: time="2025-07-07T05:55:01.157214192Z" level=error msg="StopPodSandbox for \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\" failed" error="failed to destroy network for sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:55:01.158772 kubelet[2637]: E0707 05:55:01.158187 2637 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:01.158772 kubelet[2637]: E0707 05:55:01.158298 2637 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc"} Jul 7 05:55:01.158772 kubelet[2637]: E0707 05:55:01.158378 2637 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f3f438c7-a687-46f3-af1c-f1c6e6a0121e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:55:01.158772 kubelet[2637]: E0707 05:55:01.158417 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f3f438c7-a687-46f3-af1c-f1c6e6a0121e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-5tlrj" podUID="f3f438c7-a687-46f3-af1c-f1c6e6a0121e" Jul 7 05:55:01.164194 containerd[2140]: time="2025-07-07T05:55:01.164096805Z" level=error msg="StopPodSandbox for \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\" failed" error="failed to destroy network for sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 05:55:01.164566 kubelet[2637]: E0707 05:55:01.164507 2637 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:01.164657 kubelet[2637]: E0707 05:55:01.164582 2637 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848"} Jul 7 05:55:01.164657 kubelet[2637]: E0707 05:55:01.164638 2637 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91c80da1-8133-4ef3-be15-3ede4b1f00b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 05:55:01.164840 kubelet[2637]: E0707 05:55:01.164677 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91c80da1-8133-4ef3-be15-3ede4b1f00b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-npz5f" podUID="91c80da1-8133-4ef3-be15-3ede4b1f00b5" Jul 7 05:55:01.850716 kubelet[2637]: E0707 05:55:01.850635 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:02.851443 kubelet[2637]: E0707 05:55:02.851371 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:03.852490 kubelet[2637]: E0707 05:55:03.852390 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:04.853498 kubelet[2637]: E0707 05:55:04.853366 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:05.838252 kubelet[2637]: 
E0707 05:55:05.838193 2637 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:05.854167 kubelet[2637]: E0707 05:55:05.854019 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:06.855183 kubelet[2637]: E0707 05:55:06.854994 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:07.305581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2702388405.mount: Deactivated successfully. Jul 7 05:55:07.377793 containerd[2140]: time="2025-07-07T05:55:07.376830083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:07.378958 containerd[2140]: time="2025-07-07T05:55:07.378888661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 05:55:07.381429 containerd[2140]: time="2025-07-07T05:55:07.381350957Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:07.390277 containerd[2140]: time="2025-07-07T05:55:07.390189492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:07.394278 containerd[2140]: time="2025-07-07T05:55:07.394211680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 8.322586489s" Jul 7 05:55:07.394542 containerd[2140]: time="2025-07-07T05:55:07.394279566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 05:55:07.418289 containerd[2140]: time="2025-07-07T05:55:07.418227698Z" level=info msg="CreateContainer within sandbox \"dc19851f3e20554271a0d718dd79df33927d8341776bd64cf326ddadc893b4b8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 05:55:07.448798 containerd[2140]: time="2025-07-07T05:55:07.448656584Z" level=info msg="CreateContainer within sandbox \"dc19851f3e20554271a0d718dd79df33927d8341776bd64cf326ddadc893b4b8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4fc6b3d2563bf376721177b05832f8b745eb1dad2c9dfe439578114f6a1443d9\"" Jul 7 05:55:07.450127 containerd[2140]: time="2025-07-07T05:55:07.449316506Z" level=info msg="StartContainer for \"4fc6b3d2563bf376721177b05832f8b745eb1dad2c9dfe439578114f6a1443d9\"" Jul 7 05:55:07.550463 containerd[2140]: time="2025-07-07T05:55:07.550356061Z" level=info msg="StartContainer for \"4fc6b3d2563bf376721177b05832f8b745eb1dad2c9dfe439578114f6a1443d9\" returns successfully" Jul 7 05:55:07.792970 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 05:55:07.793095 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 05:55:07.855457 kubelet[2637]: E0707 05:55:07.855389 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:08.366159 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 05:55:08.856735 kubelet[2637]: E0707 05:55:08.856580 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:09.857885 kubelet[2637]: E0707 05:55:09.857791 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:09.885788 kernel: bpftool[3443]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 05:55:10.220034 (udev-worker)[3271]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:55:10.230644 systemd-networkd[1688]: vxlan.calico: Link UP Jul 7 05:55:10.230658 systemd-networkd[1688]: vxlan.calico: Gained carrier Jul 7 05:55:10.288866 (udev-worker)[3272]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:55:10.858335 kubelet[2637]: E0707 05:55:10.858254 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:11.859424 kubelet[2637]: E0707 05:55:11.859359 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:12.076159 systemd-networkd[1688]: vxlan.calico: Gained IPv6LL Jul 7 05:55:12.859878 kubelet[2637]: E0707 05:55:12.859789 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:13.860322 kubelet[2637]: E0707 05:55:13.860257 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:14.004422 containerd[2140]: time="2025-07-07T05:55:14.004216332Z" level=info msg="StopPodSandbox for \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\"" Jul 7 05:55:14.079676 kubelet[2637]: I0707 05:55:14.079577 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jxj7d" podStartSLOduration=10.021086981 podStartE2EDuration="28.079555504s" podCreationTimestamp="2025-07-07 05:54:46 +0000 UTC" firstStartedPulling="2025-07-07 05:54:49.336963007 +0000 UTC m=+4.560471318" lastFinishedPulling="2025-07-07 05:55:07.39543153 +0000 UTC m=+22.618939841" observedRunningTime="2025-07-07 05:55:08.158729656 +0000 UTC m=+23.382238063" watchObservedRunningTime="2025-07-07 05:55:14.079555504 +0000 UTC m=+29.303063815" Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.080 [INFO][3543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.081 [INFO][3543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" iface="eth0" netns="/var/run/netns/cni-94bc741c-1e46-3907-536b-e712cf3bf0d4" Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.081 [INFO][3543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" iface="eth0" netns="/var/run/netns/cni-94bc741c-1e46-3907-536b-e712cf3bf0d4" Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.081 [INFO][3543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" iface="eth0" netns="/var/run/netns/cni-94bc741c-1e46-3907-536b-e712cf3bf0d4" Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.081 [INFO][3543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.081 [INFO][3543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.154 [INFO][3550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" HandleID="k8s-pod-network.b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.154 [INFO][3550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.154 [INFO][3550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.168 [WARNING][3550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" HandleID="k8s-pod-network.b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.168 [INFO][3550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" HandleID="k8s-pod-network.b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.170 [INFO][3550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:14.179411 containerd[2140]: 2025-07-07 05:55:14.176 [INFO][3543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:14.183234 containerd[2140]: time="2025-07-07T05:55:14.180043683Z" level=info msg="TearDown network for sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\" successfully" Jul 7 05:55:14.183234 containerd[2140]: time="2025-07-07T05:55:14.180088624Z" level=info msg="StopPodSandbox for \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\" returns successfully" Jul 7 05:55:14.183234 containerd[2140]: time="2025-07-07T05:55:14.182631712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-5tlrj,Uid:f3f438c7-a687-46f3-af1c-f1c6e6a0121e,Namespace:default,Attempt:1,}" Jul 7 05:55:14.186277 systemd[1]: run-netns-cni\x2d94bc741c\x2d1e46\x2d3907\x2d536b\x2de712cf3bf0d4.mount: Deactivated successfully. 
Jul 7 05:55:14.389139 systemd-networkd[1688]: cali1cb6fbd2095: Link UP Jul 7 05:55:14.392958 systemd-networkd[1688]: cali1cb6fbd2095: Gained carrier Jul 7 05:55:14.398095 (udev-worker)[3576]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.279 [INFO][3557] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0 nginx-deployment-8587fbcb89- default f3f438c7-a687-46f3-af1c-f1c6e6a0121e 1248 0 2025-07-07 05:54:59 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.16.202 nginx-deployment-8587fbcb89-5tlrj eth0 default [] [] [kns.default ksa.default.default] cali1cb6fbd2095 [] [] }} ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Namespace="default" Pod="nginx-deployment-8587fbcb89-5tlrj" WorkloadEndpoint="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.279 [INFO][3557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Namespace="default" Pod="nginx-deployment-8587fbcb89-5tlrj" WorkloadEndpoint="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.323 [INFO][3569] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" HandleID="k8s-pod-network.3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.323 [INFO][3569] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" HandleID="k8s-pod-network.3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273020), Attrs:map[string]string{"namespace":"default", "node":"172.31.16.202", "pod":"nginx-deployment-8587fbcb89-5tlrj", "timestamp":"2025-07-07 05:55:14.323405207 +0000 UTC"}, Hostname:"172.31.16.202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.323 [INFO][3569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.323 [INFO][3569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.323 [INFO][3569] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.202' Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.339 [INFO][3569] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" host="172.31.16.202" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.347 [INFO][3569] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.16.202" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.353 [INFO][3569] ipam/ipam.go 511: Trying affinity for 192.168.63.192/26 host="172.31.16.202" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.356 [INFO][3569] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.192/26 host="172.31.16.202" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.360 [INFO][3569] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.192/26 host="172.31.16.202" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.360 [INFO][3569] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.192/26 handle="k8s-pod-network.3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" host="172.31.16.202" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.362 [INFO][3569] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175 Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.371 [INFO][3569] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.192/26 handle="k8s-pod-network.3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" host="172.31.16.202" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.379 [INFO][3569] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.193/26] block=192.168.63.192/26 handle="k8s-pod-network.3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" host="172.31.16.202" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.379 [INFO][3569] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.193/26] handle="k8s-pod-network.3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" host="172.31.16.202" Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.379 [INFO][3569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 05:55:14.411364 containerd[2140]: 2025-07-07 05:55:14.379 [INFO][3569] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.193/26] IPv6=[] ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" HandleID="k8s-pod-network.3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:14.412471 containerd[2140]: 2025-07-07 05:55:14.382 [INFO][3557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Namespace="default" Pod="nginx-deployment-8587fbcb89-5tlrj" WorkloadEndpoint="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f3f438c7-a687-46f3-af1c-f1c6e6a0121e", ResourceVersion:"1248", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-5tlrj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1cb6fbd2095", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:14.412471 containerd[2140]: 2025-07-07 05:55:14.383 [INFO][3557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.193/32] ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Namespace="default" Pod="nginx-deployment-8587fbcb89-5tlrj" WorkloadEndpoint="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:14.412471 containerd[2140]: 2025-07-07 05:55:14.383 [INFO][3557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cb6fbd2095 ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Namespace="default" Pod="nginx-deployment-8587fbcb89-5tlrj" WorkloadEndpoint="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:14.412471 containerd[2140]: 2025-07-07 05:55:14.392 [INFO][3557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Namespace="default" Pod="nginx-deployment-8587fbcb89-5tlrj" WorkloadEndpoint="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:14.412471 containerd[2140]: 2025-07-07 05:55:14.392 [INFO][3557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Namespace="default" Pod="nginx-deployment-8587fbcb89-5tlrj" 
WorkloadEndpoint="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f3f438c7-a687-46f3-af1c-f1c6e6a0121e", ResourceVersion:"1248", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175", Pod:"nginx-deployment-8587fbcb89-5tlrj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1cb6fbd2095", MAC:"d2:90:24:10:a7:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:14.412471 containerd[2140]: 2025-07-07 05:55:14.405 [INFO][3557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175" Namespace="default" Pod="nginx-deployment-8587fbcb89-5tlrj" WorkloadEndpoint="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:14.460559 containerd[2140]: time="2025-07-07T05:55:14.460126264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:55:14.460559 containerd[2140]: time="2025-07-07T05:55:14.460243206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:55:14.460559 containerd[2140]: time="2025-07-07T05:55:14.460297155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:14.462980 containerd[2140]: time="2025-07-07T05:55:14.461213448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:14.573051 containerd[2140]: time="2025-07-07T05:55:14.572958128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-5tlrj,Uid:f3f438c7-a687-46f3-af1c-f1c6e6a0121e,Namespace:default,Attempt:1,} returns sandbox id \"3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175\"" Jul 7 05:55:14.577398 containerd[2140]: time="2025-07-07T05:55:14.577228244Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 7 05:55:14.861906 kubelet[2637]: E0707 05:55:14.860860 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:15.003364 containerd[2140]: time="2025-07-07T05:55:15.003289644Z" level=info msg="StopPodSandbox for \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\"" Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.084 [INFO][3641] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.085 [INFO][3641] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" iface="eth0" netns="/var/run/netns/cni-18bb35bd-b8f5-ebd7-0d23-7109f6552940" Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.086 [INFO][3641] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" iface="eth0" netns="/var/run/netns/cni-18bb35bd-b8f5-ebd7-0d23-7109f6552940" Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.086 [INFO][3641] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" iface="eth0" netns="/var/run/netns/cni-18bb35bd-b8f5-ebd7-0d23-7109f6552940" Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.086 [INFO][3641] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.086 [INFO][3641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.129 [INFO][3648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" HandleID="k8s-pod-network.557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.130 [INFO][3648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.130 [INFO][3648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.142 [WARNING][3648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" HandleID="k8s-pod-network.557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.142 [INFO][3648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" HandleID="k8s-pod-network.557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.145 [INFO][3648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:15.150214 containerd[2140]: 2025-07-07 05:55:15.147 [INFO][3641] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:15.152560 containerd[2140]: time="2025-07-07T05:55:15.150363827Z" level=info msg="TearDown network for sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\" successfully" Jul 7 05:55:15.152560 containerd[2140]: time="2025-07-07T05:55:15.150403407Z" level=info msg="StopPodSandbox for \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\" returns successfully" Jul 7 05:55:15.152560 containerd[2140]: time="2025-07-07T05:55:15.152090999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-npz5f,Uid:91c80da1-8133-4ef3-be15-3ede4b1f00b5,Namespace:calico-system,Attempt:1,}" Jul 7 05:55:15.190330 systemd[1]: run-containerd-runc-k8s.io-3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175-runc.MxswGT.mount: Deactivated successfully. Jul 7 05:55:15.191144 systemd[1]: run-netns-cni\x2d18bb35bd\x2db8f5\x2debd7\x2d0d23\x2d7109f6552940.mount: Deactivated successfully. 
Jul 7 05:55:15.353118 systemd-networkd[1688]: calie7c5b7a0967: Link UP Jul 7 05:55:15.357300 systemd-networkd[1688]: calie7c5b7a0967: Gained carrier Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.243 [INFO][3655] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.202-k8s-csi--node--driver--npz5f-eth0 csi-node-driver- calico-system 91c80da1-8133-4ef3-be15-3ede4b1f00b5 1256 0 2025-07-07 05:54:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.16.202 csi-node-driver-npz5f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie7c5b7a0967 [] [] }} ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Namespace="calico-system" Pod="csi-node-driver-npz5f" WorkloadEndpoint="172.31.16.202-k8s-csi--node--driver--npz5f-" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.244 [INFO][3655] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Namespace="calico-system" Pod="csi-node-driver-npz5f" WorkloadEndpoint="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.286 [INFO][3668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" HandleID="k8s-pod-network.5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.286 [INFO][3668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" HandleID="k8s-pod-network.5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af90), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.16.202", "pod":"csi-node-driver-npz5f", "timestamp":"2025-07-07 05:55:15.286143032 +0000 UTC"}, Hostname:"172.31.16.202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.286 [INFO][3668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.286 [INFO][3668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.286 [INFO][3668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.202' Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.299 [INFO][3668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" host="172.31.16.202" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.307 [INFO][3668] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.16.202" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.315 [INFO][3668] ipam/ipam.go 511: Trying affinity for 192.168.63.192/26 host="172.31.16.202" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.319 [INFO][3668] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.192/26 host="172.31.16.202" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.324 [INFO][3668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.192/26 host="172.31.16.202" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.324 [INFO][3668] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.192/26 handle="k8s-pod-network.5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" host="172.31.16.202" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.327 [INFO][3668] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.332 [INFO][3668] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.192/26 handle="k8s-pod-network.5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" host="172.31.16.202" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.342 [INFO][3668] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.194/26] block=192.168.63.192/26 handle="k8s-pod-network.5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" host="172.31.16.202" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.343 [INFO][3668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.194/26] handle="k8s-pod-network.5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" host="172.31.16.202" Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.343 [INFO][3668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 05:55:15.383547 containerd[2140]: 2025-07-07 05:55:15.343 [INFO][3668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.194/26] IPv6=[] ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" HandleID="k8s-pod-network.5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:15.385872 containerd[2140]: 2025-07-07 05:55:15.346 [INFO][3655] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Namespace="calico-system" Pod="csi-node-driver-npz5f" WorkloadEndpoint="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-csi--node--driver--npz5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91c80da1-8133-4ef3-be15-3ede4b1f00b5", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"", Pod:"csi-node-driver-npz5f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie7c5b7a0967", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:15.385872 containerd[2140]: 2025-07-07 05:55:15.347 [INFO][3655] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.194/32] ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Namespace="calico-system" Pod="csi-node-driver-npz5f" WorkloadEndpoint="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:15.385872 containerd[2140]: 2025-07-07 05:55:15.347 [INFO][3655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7c5b7a0967 ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Namespace="calico-system" Pod="csi-node-driver-npz5f" WorkloadEndpoint="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:15.385872 containerd[2140]: 2025-07-07 05:55:15.357 [INFO][3655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Namespace="calico-system" Pod="csi-node-driver-npz5f" WorkloadEndpoint="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:15.385872 containerd[2140]: 2025-07-07 05:55:15.360 [INFO][3655] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Namespace="calico-system" Pod="csi-node-driver-npz5f" 
WorkloadEndpoint="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-csi--node--driver--npz5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91c80da1-8133-4ef3-be15-3ede4b1f00b5", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa", Pod:"csi-node-driver-npz5f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie7c5b7a0967", MAC:"fe:21:64:e6:21:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:15.385872 containerd[2140]: 2025-07-07 05:55:15.377 [INFO][3655] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa" Namespace="calico-system" Pod="csi-node-driver-npz5f" WorkloadEndpoint="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:15.425111 containerd[2140]: time="2025-07-07T05:55:15.423591094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:55:15.425111 containerd[2140]: time="2025-07-07T05:55:15.423722800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:55:15.425111 containerd[2140]: time="2025-07-07T05:55:15.423803507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:15.425111 containerd[2140]: time="2025-07-07T05:55:15.424944904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:15.530983 containerd[2140]: time="2025-07-07T05:55:15.530888339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-npz5f,Uid:91c80da1-8133-4ef3-be15-3ede4b1f00b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa\"" Jul 7 05:55:15.660619 systemd-networkd[1688]: cali1cb6fbd2095: Gained IPv6LL Jul 7 05:55:15.863221 kubelet[2637]: E0707 05:55:15.861355 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:16.861960 kubelet[2637]: E0707 05:55:16.861805 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:17.390342 systemd-networkd[1688]: calie7c5b7a0967: Gained IPv6LL Jul 7 05:55:17.862445 kubelet[2637]: E0707 05:55:17.862292 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:18.036035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269257629.mount: Deactivated successfully. Jul 7 05:55:18.863339 kubelet[2637]: E0707 05:55:18.863270 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:19.599164 ntpd[2095]: Listen normally on 6 vxlan.calico 192.168.63.192:123 Jul 7 05:55:19.600897 ntpd[2095]: 7 Jul 05:55:19 ntpd[2095]: Listen normally on 6 vxlan.calico 192.168.63.192:123 Jul 7 05:55:19.600897 ntpd[2095]: 7 Jul 05:55:19 ntpd[2095]: Listen normally on 7 vxlan.calico [fe80::6428:91ff:febc:baa8%3]:123 Jul 7 05:55:19.600897 ntpd[2095]: 7 Jul 05:55:19 ntpd[2095]: Listen normally on 8 cali1cb6fbd2095 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 7 05:55:19.600897 ntpd[2095]: 7 Jul 05:55:19 ntpd[2095]: Listen normally on 9 calie7c5b7a0967 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 7 05:55:19.599300 ntpd[2095]: Listen normally on 7 vxlan.calico [fe80::6428:91ff:febc:baa8%3]:123 Jul 7 05:55:19.601409 containerd[2140]: time="2025-07-07T05:55:19.601026727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:19.599386 ntpd[2095]: Listen normally on 8 cali1cb6fbd2095 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 7 05:55:19.599454 ntpd[2095]: Listen normally on 9 calie7c5b7a0967 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 7 05:55:19.603365 containerd[2140]: time="2025-07-07T05:55:19.603301030Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69950600" Jul 7 05:55:19.604069 containerd[2140]: time="2025-07-07T05:55:19.603997605Z" level=info msg="ImageCreate event name:\"sha256:e55a872cbf1b1d996b1d5333796fbe6ec0b825868f3ad30b387fc65697ed40dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:19.611128 containerd[2140]: time="2025-07-07T05:55:19.611027792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:9a18b8b1845bf88a3388cde4ec626461965a717ac641198120979e75438b9693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:19.614015 containerd[2140]: time="2025-07-07T05:55:19.613471834Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e55a872cbf1b1d996b1d5333796fbe6ec0b825868f3ad30b387fc65697ed40dd\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:9a18b8b1845bf88a3388cde4ec626461965a717ac641198120979e75438b9693\", size \"69950478\" in 5.036178918s" Jul 7 05:55:19.614015 containerd[2140]: time="2025-07-07T05:55:19.613537777Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e55a872cbf1b1d996b1d5333796fbe6ec0b825868f3ad30b387fc65697ed40dd\"" Jul 7 05:55:19.617063 containerd[2140]: time="2025-07-07T05:55:19.616996235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 05:55:19.619124 containerd[2140]: time="2025-07-07T05:55:19.618888456Z" level=info msg="CreateContainer within sandbox \"3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 7 05:55:19.641604 containerd[2140]: time="2025-07-07T05:55:19.641511743Z" level=info msg="CreateContainer within sandbox \"3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7d70b1e4934751a40c51f347fe1289293f8c0203a65e0ea2ac1839dfe5403102\"" Jul 7 05:55:19.652541 containerd[2140]: time="2025-07-07T05:55:19.652470709Z" level=info msg="StartContainer for \"7d70b1e4934751a40c51f347fe1289293f8c0203a65e0ea2ac1839dfe5403102\"" Jul 7 05:55:19.756914 containerd[2140]: time="2025-07-07T05:55:19.756636260Z" level=info msg="StartContainer for \"7d70b1e4934751a40c51f347fe1289293f8c0203a65e0ea2ac1839dfe5403102\" returns successfully" Jul 7 05:55:19.864600 kubelet[2637]: E0707 05:55:19.863613 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:20.863813 kubelet[2637]: E0707 05:55:20.863733 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:20.890054 containerd[2140]: time="2025-07-07T05:55:20.889988268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:20.891543 containerd[2140]: time="2025-07-07T05:55:20.891490061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 7 05:55:20.892333 containerd[2140]: time="2025-07-07T05:55:20.892247242Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:20.895932 containerd[2140]: time="2025-07-07T05:55:20.895802132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:20.897581 containerd[2140]: time="2025-07-07T05:55:20.897372842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.280311948s" Jul 7 05:55:20.897581 containerd[2140]: time="2025-07-07T05:55:20.897427583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 7 05:55:20.901237 containerd[2140]: time="2025-07-07T05:55:20.901188157Z" level=info 
msg="CreateContainer within sandbox \"5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 05:55:20.923069 containerd[2140]: time="2025-07-07T05:55:20.921710035Z" level=info msg="CreateContainer within sandbox \"5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c40db8bc09ba16431264fac07c718611629536beab981470b2a79c99f4d448b3\"" Jul 7 05:55:20.924805 containerd[2140]: time="2025-07-07T05:55:20.923494803Z" level=info msg="StartContainer for \"c40db8bc09ba16431264fac07c718611629536beab981470b2a79c99f4d448b3\"" Jul 7 05:55:21.028410 containerd[2140]: time="2025-07-07T05:55:21.028290075Z" level=info msg="StartContainer for \"c40db8bc09ba16431264fac07c718611629536beab981470b2a79c99f4d448b3\" returns successfully" Jul 7 05:55:21.032197 containerd[2140]: time="2025-07-07T05:55:21.032127927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 05:55:21.864968 kubelet[2637]: E0707 05:55:21.864894 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:22.412435 containerd[2140]: time="2025-07-07T05:55:22.412340600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:22.414343 containerd[2140]: time="2025-07-07T05:55:22.414247910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 7 05:55:22.416809 containerd[2140]: time="2025-07-07T05:55:22.416697972Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:22.423925 containerd[2140]: time="2025-07-07T05:55:22.423834846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:22.426058 containerd[2140]: time="2025-07-07T05:55:22.425783883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.393286444s" Jul 7 05:55:22.426058 containerd[2140]: time="2025-07-07T05:55:22.425859145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 7 05:55:22.436014 containerd[2140]: time="2025-07-07T05:55:22.434499275Z" level=info msg="CreateContainer within sandbox \"5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 05:55:22.465812 containerd[2140]: time="2025-07-07T05:55:22.465710302Z" level=info msg="CreateContainer within sandbox \"5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"e4351fb132e67c820cc05bbc232ab0176bde40d541285c4efb197ee0448f1a20\"" Jul 7 05:55:22.466793 containerd[2140]: time="2025-07-07T05:55:22.466635783Z" level=info msg="StartContainer for \"e4351fb132e67c820cc05bbc232ab0176bde40d541285c4efb197ee0448f1a20\"" Jul 7 05:55:22.572417 containerd[2140]: time="2025-07-07T05:55:22.572239007Z" level=info msg="StartContainer for \"e4351fb132e67c820cc05bbc232ab0176bde40d541285c4efb197ee0448f1a20\" returns successfully" Jul 7 05:55:22.578986 update_engine[2113]: I20250707 05:55:22.578905 2113 update_attempter.cc:509] Updating boot flags... Jul 7 05:55:22.678929 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3914) Jul 7 05:55:22.865503 kubelet[2637]: E0707 05:55:22.865424 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:22.956778 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3914) Jul 7 05:55:23.012092 kubelet[2637]: I0707 05:55:23.012051 2637 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 05:55:23.012372 kubelet[2637]: I0707 05:55:23.012328 2637 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 05:55:23.206660 kubelet[2637]: I0707 05:55:23.206300 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-npz5f" podStartSLOduration=30.311427899999998 podStartE2EDuration="37.206276308s" podCreationTimestamp="2025-07-07 05:54:46 +0000 UTC" firstStartedPulling="2025-07-07 05:55:15.53351542 +0000 UTC m=+30.757023731" lastFinishedPulling="2025-07-07 05:55:22.42836384 +0000 UTC m=+37.651872139" observedRunningTime="2025-07-07 05:55:23.206121261 +0000 UTC m=+38.429629608" watchObservedRunningTime="2025-07-07 05:55:23.206276308 +0000 UTC m=+38.429784619" Jul 7 05:55:23.206660 kubelet[2637]: I0707 05:55:23.206597 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-5tlrj" podStartSLOduration=19.166527663 podStartE2EDuration="24.206583954s" podCreationTimestamp="2025-07-07 05:54:59 +0000 UTC" firstStartedPulling="2025-07-07 05:55:14.575886847 +0000 UTC m=+29.799395146" lastFinishedPulling="2025-07-07 05:55:19.615943138 +0000 UTC m=+34.839451437" observedRunningTime="2025-07-07 05:55:20.165400165 +0000 UTC m=+35.388908512" watchObservedRunningTime="2025-07-07 05:55:23.206583954 +0000 UTC m=+38.430092253" Jul 7 05:55:23.237599 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3914) Jul 7 05:55:23.865937 kubelet[2637]: E0707 05:55:23.865858 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:24.866850 kubelet[2637]: E0707 05:55:24.866727 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:25.838935 kubelet[2637]: E0707 05:55:25.838873 2637 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:25.867603 kubelet[2637]: E0707 05:55:25.867528 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 
05:55:26.868707 kubelet[2637]: E0707 05:55:26.868650 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:27.869331 kubelet[2637]: E0707 05:55:27.869259 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:28.164098 kubelet[2637]: I0707 05:55:28.163701 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b8a6447c-c837-4f3f-871e-3855f7c1b403-data\") pod \"nfs-server-provisioner-0\" (UID: \"b8a6447c-c837-4f3f-871e-3855f7c1b403\") " pod="default/nfs-server-provisioner-0" Jul 7 05:55:28.164098 kubelet[2637]: I0707 05:55:28.163808 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9ht\" (UniqueName: \"kubernetes.io/projected/b8a6447c-c837-4f3f-871e-3855f7c1b403-kube-api-access-bj9ht\") pod \"nfs-server-provisioner-0\" (UID: \"b8a6447c-c837-4f3f-871e-3855f7c1b403\") " pod="default/nfs-server-provisioner-0" Jul 7 05:55:28.408706 containerd[2140]: time="2025-07-07T05:55:28.408588977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b8a6447c-c837-4f3f-871e-3855f7c1b403,Namespace:default,Attempt:0,}" Jul 7 05:55:28.647344 systemd-networkd[1688]: cali60e51b789ff: Link UP Jul 7 05:55:28.648935 systemd-networkd[1688]: cali60e51b789ff: Gained carrier Jul 7 05:55:28.653877 (udev-worker)[4193]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.500 [INFO][4175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.202-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default b8a6447c-c837-4f3f-871e-3855f7c1b403 1322 0 2025-07-07 05:55:27 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.16.202 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.202-k8s-nfs--server--provisioner--0-" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.501 [INFO][4175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.202-k8s-nfs--server--provisioner--0-eth0" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.551 [INFO][4186] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" 
HandleID="k8s-pod-network.d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Workload="172.31.16.202-k8s-nfs--server--provisioner--0-eth0" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.551 [INFO][4186] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" HandleID="k8s-pod-network.d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Workload="172.31.16.202-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb640), Attrs:map[string]string{"namespace":"default", "node":"172.31.16.202", "pod":"nfs-server-provisioner-0", "timestamp":"2025-07-07 05:55:28.551485218 +0000 UTC"}, Hostname:"172.31.16.202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.551 [INFO][4186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.551 [INFO][4186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.552 [INFO][4186] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.202' Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.569 [INFO][4186] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" host="172.31.16.202" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.576 [INFO][4186] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.16.202" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.584 [INFO][4186] ipam/ipam.go 511: Trying affinity for 192.168.63.192/26 host="172.31.16.202" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.587 [INFO][4186] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.192/26 host="172.31.16.202" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.595 [INFO][4186] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.192/26 host="172.31.16.202" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.595 [INFO][4186] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.192/26 handle="k8s-pod-network.d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" host="172.31.16.202" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.602 [INFO][4186] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.616 [INFO][4186] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.192/26 handle="k8s-pod-network.d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" host="172.31.16.202" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.636 [INFO][4186] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.195/26] block=192.168.63.192/26 handle="k8s-pod-network.d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" host="172.31.16.202" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.636 [INFO][4186] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.195/26] 
handle="k8s-pod-network.d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" host="172.31.16.202" Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.636 [INFO][4186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:28.676060 containerd[2140]: 2025-07-07 05:55:28.636 [INFO][4186] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.195/26] IPv6=[] ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" HandleID="k8s-pod-network.d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Workload="172.31.16.202-k8s-nfs--server--provisioner--0-eth0" Jul 7 05:55:28.685331 containerd[2140]: 2025-07-07 05:55:28.639 [INFO][4175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.202-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b8a6447c-c837-4f3f-871e-3855f7c1b403", ResourceVersion:"1322", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.63.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:28.685331 containerd[2140]: 2025-07-07 05:55:28.640 [INFO][4175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.195/32] ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.202-k8s-nfs--server--provisioner--0-eth0" Jul 7 05:55:28.685331 containerd[2140]: 2025-07-07 05:55:28.640 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.202-k8s-nfs--server--provisioner--0-eth0" Jul 7 05:55:28.685331 containerd[2140]: 2025-07-07 05:55:28.646 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.202-k8s-nfs--server--provisioner--0-eth0" Jul 7 05:55:28.685711 containerd[2140]: 2025-07-07 05:55:28.647 [INFO][4175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.202-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b8a6447c-c837-4f3f-871e-3855f7c1b403", ResourceVersion:"1322", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", 
IPNetworks:[]string{"192.168.63.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"02:9f:f9:4a:3a:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:28.685711 containerd[2140]: 2025-07-07 05:55:28.671 [INFO][4175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.202-k8s-nfs--server--provisioner--0-eth0" Jul 7 05:55:28.726894 containerd[2140]: time="2025-07-07T05:55:28.726380395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:55:28.726894 containerd[2140]: time="2025-07-07T05:55:28.726465169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:55:28.726894 containerd[2140]: time="2025-07-07T05:55:28.726490344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:28.727423 containerd[2140]: time="2025-07-07T05:55:28.726857169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:28.821462 containerd[2140]: time="2025-07-07T05:55:28.821394621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b8a6447c-c837-4f3f-871e-3855f7c1b403,Namespace:default,Attempt:0,} returns sandbox id \"d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc\"" Jul 7 05:55:28.824874 containerd[2140]: time="2025-07-07T05:55:28.824816725Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 7 05:55:28.870296 kubelet[2637]: E0707 05:55:28.870228 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:29.870917 kubelet[2637]: E0707 05:55:29.870840 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:30.380109 systemd-networkd[1688]: cali60e51b789ff: Gained IPv6LL Jul 7 05:55:30.871416 kubelet[2637]: E0707 05:55:30.871367 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:31.761176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635493184.mount: Deactivated successfully. Jul 7 05:55:31.872990 kubelet[2637]: E0707 05:55:31.872923 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:32.599198 ntpd[2095]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jul 7 05:55:32.601417 ntpd[2095]: 7 Jul 05:55:32 ntpd[2095]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jul 7 05:55:32.873734 kubelet[2637]: E0707 05:55:32.873573 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:33.874020 kubelet[2637]: E0707 05:55:33.873866 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:34.874660 kubelet[2637]: E0707 05:55:34.874603 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:35.022057 containerd[2140]: time="2025-07-07T05:55:35.021961844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:35.024421 containerd[2140]: time="2025-07-07T05:55:35.024295889Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jul 7 05:55:35.026689 containerd[2140]: time="2025-07-07T05:55:35.026548602Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:35.035444 containerd[2140]: time="2025-07-07T05:55:35.035311142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:55:35.038206 containerd[2140]: time="2025-07-07T05:55:35.037951346Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo 
digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 6.212629266s" Jul 7 05:55:35.038206 containerd[2140]: time="2025-07-07T05:55:35.038029942Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 7 05:55:35.043016 containerd[2140]: time="2025-07-07T05:55:35.042938955Z" level=info msg="CreateContainer within sandbox \"d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 7 05:55:35.075637 containerd[2140]: time="2025-07-07T05:55:35.075487636Z" level=info msg="CreateContainer within sandbox \"d8e2f34a6b7efb2e803be79e7564de01bd3cf56a934eaff9af2767a4fcfa9ddc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"99d53a52633a4268332fbf8e14aea7c5ea0c44f6a9486b0cba27239a8603c0c9\"" Jul 7 05:55:35.076786 containerd[2140]: time="2025-07-07T05:55:35.076422389Z" level=info msg="StartContainer for \"99d53a52633a4268332fbf8e14aea7c5ea0c44f6a9486b0cba27239a8603c0c9\"" Jul 7 05:55:35.176833 containerd[2140]: time="2025-07-07T05:55:35.175813561Z" level=info msg="StartContainer for \"99d53a52633a4268332fbf8e14aea7c5ea0c44f6a9486b0cba27239a8603c0c9\" returns successfully" Jul 7 05:55:35.875631 kubelet[2637]: E0707 05:55:35.875555 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:36.876326 kubelet[2637]: E0707 05:55:36.876264 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:37.877254 kubelet[2637]: E0707 05:55:37.877180 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:38.878379 kubelet[2637]: E0707 05:55:38.878309 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:39.878526 kubelet[2637]: E0707 05:55:39.878446 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:40.879605 kubelet[2637]: E0707 05:55:40.879500 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:41.880768 kubelet[2637]: E0707 05:55:41.880673 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:42.881121 kubelet[2637]: E0707 05:55:42.881052 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:43.881779 kubelet[2637]: E0707 05:55:43.881687 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:44.882689 kubelet[2637]: E0707 05:55:44.882616 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:45.838962 kubelet[2637]: E0707 05:55:45.838890 2637 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:45.883302 kubelet[2637]: E0707 05:55:45.883227 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 7 05:55:45.893337 containerd[2140]: time="2025-07-07T05:55:45.893161582Z" level=info msg="StopPodSandbox for \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\"" Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:45.956 [WARNING][4388] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f3f438c7-a687-46f3-af1c-f1c6e6a0121e", ResourceVersion:"1275", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175", Pod:"nginx-deployment-8587fbcb89-5tlrj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1cb6fbd2095", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:45.957 [INFO][4388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:45.957 [INFO][4388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" iface="eth0" netns="" Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:45.957 [INFO][4388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:45.957 [INFO][4388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:45.999 [INFO][4395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" HandleID="k8s-pod-network.b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:45.999 [INFO][4395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:45.999 [INFO][4395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:46.017 [WARNING][4395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" HandleID="k8s-pod-network.b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:46.018 [INFO][4395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" HandleID="k8s-pod-network.b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:46.020 [INFO][4395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:46.025089 containerd[2140]: 2025-07-07 05:55:46.022 [INFO][4388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:46.026639 containerd[2140]: time="2025-07-07T05:55:46.025127786Z" level=info msg="TearDown network for sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\" successfully" Jul 7 05:55:46.026639 containerd[2140]: time="2025-07-07T05:55:46.025174275Z" level=info msg="StopPodSandbox for \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\" returns successfully" Jul 7 05:55:46.026986 containerd[2140]: time="2025-07-07T05:55:46.026921825Z" level=info msg="RemovePodSandbox for \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\"" Jul 7 05:55:46.027070 containerd[2140]: time="2025-07-07T05:55:46.027003480Z" level=info msg="Forcibly stopping sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\"" Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.090 [WARNING][4411] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f3f438c7-a687-46f3-af1c-f1c6e6a0121e", ResourceVersion:"1275", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"3530e86f1e6050d9fd6324bf44abde0e00a232098cbfcef3c5fb100292471175", Pod:"nginx-deployment-8587fbcb89-5tlrj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1cb6fbd2095", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.091 [INFO][4411] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.091 [INFO][4411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" iface="eth0" netns="" Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.091 [INFO][4411] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.091 [INFO][4411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.128 [INFO][4418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" HandleID="k8s-pod-network.b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.128 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.128 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.162 [WARNING][4418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" HandleID="k8s-pod-network.b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.162 [INFO][4418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" HandleID="k8s-pod-network.b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Workload="172.31.16.202-k8s-nginx--deployment--8587fbcb89--5tlrj-eth0" Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.166 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:46.172635 containerd[2140]: 2025-07-07 05:55:46.168 [INFO][4411] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc" Jul 7 05:55:46.172635 containerd[2140]: time="2025-07-07T05:55:46.171411108Z" level=info msg="TearDown network for sandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\" successfully" Jul 7 05:55:46.178121 containerd[2140]: time="2025-07-07T05:55:46.177985367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:46.178299 containerd[2140]: time="2025-07-07T05:55:46.178144720Z" level=info msg="RemovePodSandbox \"b7a1a9aba771b0bc3ffb23334d1598d85c604f8140bd9232e472550f5e5600cc\" returns successfully" Jul 7 05:55:46.179158 containerd[2140]: time="2025-07-07T05:55:46.179034063Z" level=info msg="StopPodSandbox for \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\"" Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.271 [WARNING][4432] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-csi--node--driver--npz5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91c80da1-8133-4ef3-be15-3ede4b1f00b5", ResourceVersion:"1292", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa", Pod:"csi-node-driver-npz5f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie7c5b7a0967", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.271 [INFO][4432] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.271 [INFO][4432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" iface="eth0" netns="" Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.271 [INFO][4432] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.271 [INFO][4432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.309 [INFO][4440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" HandleID="k8s-pod-network.557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.309 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.310 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.322 [WARNING][4440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" HandleID="k8s-pod-network.557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.322 [INFO][4440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" HandleID="k8s-pod-network.557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.325 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:46.330567 containerd[2140]: 2025-07-07 05:55:46.327 [INFO][4432] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:46.331531 containerd[2140]: time="2025-07-07T05:55:46.330805827Z" level=info msg="TearDown network for sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\" successfully" Jul 7 05:55:46.331531 containerd[2140]: time="2025-07-07T05:55:46.330856538Z" level=info msg="StopPodSandbox for \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\" returns successfully" Jul 7 05:55:46.331650 containerd[2140]: time="2025-07-07T05:55:46.331577905Z" level=info msg="RemovePodSandbox for \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\"" Jul 7 05:55:46.331650 containerd[2140]: time="2025-07-07T05:55:46.331627308Z" level=info msg="Forcibly stopping sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\"" Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.400 [WARNING][4454] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-csi--node--driver--npz5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91c80da1-8133-4ef3-be15-3ede4b1f00b5", ResourceVersion:"1292", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"5557da3e5c3dbd333a22fcd2c96c6ced6ca52433149125be620c1fd40742e3aa", Pod:"csi-node-driver-npz5f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie7c5b7a0967", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.400 [INFO][4454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.400 [INFO][4454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" iface="eth0" netns="" Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.400 [INFO][4454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.400 [INFO][4454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.437 [INFO][4461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" HandleID="k8s-pod-network.557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.438 [INFO][4461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.438 [INFO][4461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.451 [WARNING][4461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" HandleID="k8s-pod-network.557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.451 [INFO][4461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" HandleID="k8s-pod-network.557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Workload="172.31.16.202-k8s-csi--node--driver--npz5f-eth0" Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.453 [INFO][4461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 05:55:46.458693 containerd[2140]: 2025-07-07 05:55:46.456 [INFO][4454] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848" Jul 7 05:55:46.458693 containerd[2140]: time="2025-07-07T05:55:46.458644201Z" level=info msg="TearDown network for sandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\" successfully" Jul 7 05:55:46.466799 containerd[2140]: time="2025-07-07T05:55:46.466407689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:46.466799 containerd[2140]: time="2025-07-07T05:55:46.466501782Z" level=info msg="RemovePodSandbox \"557cce8df86d6d0d67ce7f032939308303a86d87e0f09799ea7eaad5fbfc4848\" returns successfully" Jul 7 05:55:46.884387 kubelet[2637]: E0707 05:55:46.884313 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:47.885488 kubelet[2637]: E0707 05:55:47.885407 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:48.886199 kubelet[2637]: E0707 05:55:48.886120 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:49.886863 kubelet[2637]: E0707 05:55:49.886796 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:50.887827 kubelet[2637]: E0707 05:55:50.887774 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:51.888409 kubelet[2637]: E0707 05:55:51.888343 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:52.889253 kubelet[2637]: E0707 05:55:52.889186 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:53.889689 kubelet[2637]: E0707 05:55:53.889624 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:54.890219 kubelet[2637]: E0707 05:55:54.890147 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:55.890411 kubelet[2637]: E0707 05:55:55.890334 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:56.890847 kubelet[2637]: E0707 05:55:56.890782 2637 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:57.891236 kubelet[2637]: E0707 05:55:57.891167 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:58.891826 kubelet[2637]: E0707 05:55:58.891758 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:59.684950 kubelet[2637]: I0707 05:55:59.684850 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=26.46851231 podStartE2EDuration="32.684828747s" podCreationTimestamp="2025-07-07 05:55:27 +0000 UTC" firstStartedPulling="2025-07-07 05:55:28.823980815 +0000 UTC m=+44.047489126" lastFinishedPulling="2025-07-07 05:55:35.040297252 +0000 UTC m=+50.263805563" observedRunningTime="2025-07-07 05:55:35.251661002 +0000 UTC m=+50.475169349" watchObservedRunningTime="2025-07-07 05:55:59.684828747 +0000 UTC m=+74.908337070" Jul 7 05:55:59.779205 kubelet[2637]: I0707 05:55:59.779078 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8z8f\" (UniqueName: \"kubernetes.io/projected/a225fd9a-bbbe-4604-8842-2d682006eddd-kube-api-access-x8z8f\") pod \"test-pod-1\" (UID: \"a225fd9a-bbbe-4604-8842-2d682006eddd\") " pod="default/test-pod-1" Jul 7 05:55:59.779205 kubelet[2637]: I0707 05:55:59.779153 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15ea77c3-08ec-4837-b463-4269b5357427\" (UniqueName: \"kubernetes.io/nfs/a225fd9a-bbbe-4604-8842-2d682006eddd-pvc-15ea77c3-08ec-4837-b463-4269b5357427\") pod \"test-pod-1\" (UID: \"a225fd9a-bbbe-4604-8842-2d682006eddd\") " pod="default/test-pod-1" Jul 7 05:55:59.892484 kubelet[2637]: E0707 05:55:59.892435 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:55:59.924803 kernel: FS-Cache: Loaded Jul 7 05:55:59.982176 kernel: RPC: Registered named UNIX socket transport module. Jul 7 05:55:59.982314 kernel: RPC: Registered udp transport module. Jul 7 05:55:59.982358 kernel: RPC: Registered tcp transport module. Jul 7 05:55:59.984094 kernel: RPC: Registered tcp-with-tls transport module. Jul 7 05:55:59.984361 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 7 05:56:00.330177 kernel: NFS: Registering the id_resolver key type Jul 7 05:56:00.330297 kernel: Key type id_resolver registered Jul 7 05:56:00.330341 kernel: Key type id_legacy registered Jul 7 05:56:00.368327 nfsidmap[4512]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jul 7 05:56:00.374235 nfsidmap[4513]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jul 7 05:56:00.592047 containerd[2140]: time="2025-07-07T05:56:00.591869615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a225fd9a-bbbe-4604-8842-2d682006eddd,Namespace:default,Attempt:0,}" Jul 7 05:56:00.793333 systemd-networkd[1688]: cali5ec59c6bf6e: Link UP Jul 7 05:56:00.793696 systemd-networkd[1688]: cali5ec59c6bf6e: Gained carrier Jul 7 05:56:00.794262 (udev-worker)[4510]: Network interface NamePolicy= disabled on kernel command line. 
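The pod_startup_latency_tracker entries above report two figures per pod, and they are consistent with podStartSLOduration being the end-to-end startup time minus the image-pull window (lastFinishedPulling − firstStartedPulling): for nfs-server-provisioner-0, 32.684828747s − (m=+50.263805563 − m=+44.047489126) = 26.468512310s, matching the logged 26.46851231; the earlier csi-node-driver figures agree up to rounding. The short Go check below only reproduces that arithmetic with the values copied from the log; it is a worked verification, not kubelet code.

    // Worked check of the relationship suggested by the kubelet's
    // pod_startup_latency_tracker lines above (values copied from the log).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        e2e := 32684828747 * time.Nanosecond       // podStartE2EDuration
        firstPull := 44047489126 * time.Nanosecond // firstStartedPulling, monotonic offset m=+44.047489126
        lastPull := 50263805563 * time.Nanosecond  // lastFinishedPulling, monotonic offset m=+50.263805563

        slo := e2e - (lastPull - firstPull)
        fmt.Println(slo) // prints 26.46851231s, matching the logged podStartSLOduration
    }
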
Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.671 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.202-k8s-test--pod--1-eth0 default a225fd9a-bbbe-4604-8842-2d682006eddd 1438 0 2025-07-07 05:55:29 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.16.202 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.202-k8s-test--pod--1-" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.671 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.202-k8s-test--pod--1-eth0" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.714 [INFO][4526] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" HandleID="k8s-pod-network.080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Workload="172.31.16.202-k8s-test--pod--1-eth0" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.714 [INFO][4526] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" HandleID="k8s-pod-network.080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Workload="172.31.16.202-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b050), Attrs:map[string]string{"namespace":"default", "node":"172.31.16.202", "pod":"test-pod-1", "timestamp":"2025-07-07 05:56:00.71412616 +0000 UTC"}, Hostname:"172.31.16.202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.714 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.714 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.714 [INFO][4526] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.202' Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.728 [INFO][4526] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" host="172.31.16.202" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.737 [INFO][4526] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.16.202" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.747 [INFO][4526] ipam/ipam.go 511: Trying affinity for 192.168.63.192/26 host="172.31.16.202" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.750 [INFO][4526] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.192/26 host="172.31.16.202" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.754 [INFO][4526] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.192/26 host="172.31.16.202" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.754 [INFO][4526] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.192/26 handle="k8s-pod-network.080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" host="172.31.16.202" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.756 [INFO][4526] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5 Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.766 [INFO][4526] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.192/26 handle="k8s-pod-network.080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" host="172.31.16.202" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.781 [INFO][4526] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.196/26] block=192.168.63.192/26 handle="k8s-pod-network.080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" host="172.31.16.202" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.781 [INFO][4526] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.196/26] handle="k8s-pod-network.080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" host="172.31.16.202" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.781 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
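The ipam.go entries above trace the same allocation path for every pod on this node: take the host-wide IPAM lock, look up the host's block affinities, confirm the affine block 192.168.63.192/26, assign an address from it (.193, .194, .195 and now .196), create a handle, and write the block back before releasing the lock. The Go sketch below is a toy, single-process model of that block-affinity idea with invented types; it is not libcalico-go, and it ignores reservations, multiple blocks, and datastore retries.

    // Toy single-host model of block-affinity IP allocation, loosely following
    // the steps logged above. All types and names here are illustrative.
    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    type block struct {
        cidr      netip.Prefix          // e.g. 192.168.63.192/26, affine to this host
        allocated map[netip.Addr]string // address -> handle ID
    }

    type allocator struct {
        mu    sync.Mutex // stands in for the "host-wide IPAM lock"
        block *block
    }

    // assign hands out the next free address in the host's affine block and
    // records it against the given handle, mirroring the logged sequence:
    // lock -> load block -> pick address -> create handle -> write block -> unlock.
    func (a *allocator) assign(handleID string) (netip.Addr, error) {
        a.mu.Lock()
        defer a.mu.Unlock()

        for addr := a.block.cidr.Addr(); a.block.cidr.Contains(addr); addr = addr.Next() {
            if _, used := a.block.allocated[addr]; !used {
                a.block.allocated[addr] = handleID
                return addr, nil
            }
        }
        return netip.Addr{}, fmt.Errorf("block %s exhausted", a.block.cidr)
    }

    func main() {
        a := &allocator{block: &block{
            cidr:      netip.MustParsePrefix("192.168.63.192/26"),
            allocated: map[netip.Addr]string{},
        }}
        // In the log the first pod received .193, so .192 was presumably already
        // in use on the node; this toy simply starts at the first address.
        ip, _ := a.assign("k8s-pod-network.080c6ac6...")
        fmt.Println(ip)
    }
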
Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.781 [INFO][4526] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.196/26] IPv6=[] ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" HandleID="k8s-pod-network.080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Workload="172.31.16.202-k8s-test--pod--1-eth0" Jul 7 05:56:00.817991 containerd[2140]: 2025-07-07 05:56:00.787 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.202-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a225fd9a-bbbe-4604-8842-2d682006eddd", ResourceVersion:"1438", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 5, 55, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:00.820584 containerd[2140]: 2025-07-07 05:56:00.787 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.196/32] ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.202-k8s-test--pod--1-eth0" Jul 7 05:56:00.820584 containerd[2140]: 2025-07-07 05:56:00.787 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.202-k8s-test--pod--1-eth0" Jul 7 05:56:00.820584 containerd[2140]: 2025-07-07 05:56:00.792 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.202-k8s-test--pod--1-eth0" Jul 7 05:56:00.820584 containerd[2140]: 2025-07-07 05:56:00.795 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.202-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.202-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a225fd9a-bbbe-4604-8842-2d682006eddd", ResourceVersion:"1438", Generation:0, CreationTimestamp:time.Date(2025, 
time.July, 7, 5, 55, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.202", ContainerID:"080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"2e:ce:76:11:1c:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 05:56:00.820584 containerd[2140]: 2025-07-07 05:56:00.814 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.202-k8s-test--pod--1-eth0" Jul 7 05:56:00.854963 containerd[2140]: time="2025-07-07T05:56:00.854427859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:56:00.854963 containerd[2140]: time="2025-07-07T05:56:00.854538863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:56:00.854963 containerd[2140]: time="2025-07-07T05:56:00.854575985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:00.855490 containerd[2140]: time="2025-07-07T05:56:00.855004014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:56:00.895653 kubelet[2637]: E0707 05:56:00.895583 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:00.952467 containerd[2140]: time="2025-07-07T05:56:00.952412002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a225fd9a-bbbe-4604-8842-2d682006eddd,Namespace:default,Attempt:0,} returns sandbox id \"080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5\"" Jul 7 05:56:00.955403 containerd[2140]: time="2025-07-07T05:56:00.955121771Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 7 05:56:01.282899 containerd[2140]: time="2025-07-07T05:56:01.282159507Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:56:01.284535 containerd[2140]: time="2025-07-07T05:56:01.284487878Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jul 7 05:56:01.290258 containerd[2140]: time="2025-07-07T05:56:01.290194480Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e55a872cbf1b1d996b1d5333796fbe6ec0b825868f3ad30b387fc65697ed40dd\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:9a18b8b1845bf88a3388cde4ec626461965a717ac641198120979e75438b9693\", size \"69950478\" in 335.003827ms" Jul 7 05:56:01.290789 containerd[2140]: time="2025-07-07T05:56:01.290261910Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e55a872cbf1b1d996b1d5333796fbe6ec0b825868f3ad30b387fc65697ed40dd\"" Jul 7 05:56:01.294021 containerd[2140]: time="2025-07-07T05:56:01.293888992Z" level=info msg="CreateContainer within sandbox \"080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 7 05:56:01.326243 containerd[2140]: time="2025-07-07T05:56:01.326108426Z" level=info msg="CreateContainer within sandbox \"080c6ac6ccdb7fedfc64d7a52560019979252d987a624f1cdecc96b41bd2abb5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ac63960025d305e8ebe104646d7e1d6d275ebd30970db1b892369e7e963bf2c9\"" Jul 7 05:56:01.327656 containerd[2140]: time="2025-07-07T05:56:01.327530339Z" level=info msg="StartContainer for \"ac63960025d305e8ebe104646d7e1d6d275ebd30970db1b892369e7e963bf2c9\"" Jul 7 05:56:01.423498 containerd[2140]: time="2025-07-07T05:56:01.423363730Z" level=info msg="StartContainer for \"ac63960025d305e8ebe104646d7e1d6d275ebd30970db1b892369e7e963bf2c9\" returns successfully" Jul 7 05:56:01.896497 kubelet[2637]: E0707 05:56:01.896444 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:02.340023 kubelet[2637]: I0707 05:56:02.339885 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=33.00297648 podStartE2EDuration="33.339863354s" podCreationTimestamp="2025-07-07 05:55:29 +0000 UTC" firstStartedPulling="2025-07-07 05:56:00.954541213 +0000 UTC m=+76.178049512" lastFinishedPulling="2025-07-07 05:56:01.291428087 +0000 UTC m=+76.514936386" observedRunningTime="2025-07-07 05:56:02.339621495 +0000 UTC m=+77.563129830" watchObservedRunningTime="2025-07-07 05:56:02.339863354 +0000 UTC m=+77.563371665" Jul 7 05:56:02.380593 systemd-networkd[1688]: cali5ec59c6bf6e: Gained IPv6LL Jul 7 
05:56:02.897246 kubelet[2637]: E0707 05:56:02.897185 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:03.897560 kubelet[2637]: E0707 05:56:03.897494 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:04.598954 ntpd[2095]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jul 7 05:56:04.599600 ntpd[2095]: 7 Jul 05:56:04 ntpd[2095]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jul 7 05:56:04.898524 kubelet[2637]: E0707 05:56:04.898373 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:05.839081 kubelet[2637]: E0707 05:56:05.839021 2637 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:05.898580 kubelet[2637]: E0707 05:56:05.898517 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:06.899256 kubelet[2637]: E0707 05:56:06.899192 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:07.900109 kubelet[2637]: E0707 05:56:07.900048 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:08.900252 kubelet[2637]: E0707 05:56:08.900188 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:09.901111 kubelet[2637]: E0707 05:56:09.901042 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:10.901272 kubelet[2637]: E0707 05:56:10.901195 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:11.902027 kubelet[2637]: E0707 05:56:11.901957 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:12.902976 kubelet[2637]: E0707 05:56:12.902910 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:13.904090 kubelet[2637]: E0707 05:56:13.904024 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:14.904697 kubelet[2637]: E0707 05:56:14.904630 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:15.905377 kubelet[2637]: E0707 05:56:15.905321 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:16.906508 kubelet[2637]: E0707 05:56:16.906450 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:17.420342 kubelet[2637]: E0707 05:56:17.420272 2637 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.16.202)" Jul 7 05:56:17.906882 kubelet[2637]: E0707 05:56:17.906824 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
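
The pod_startup_latency_tracker record at 05:56:02 above is internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small sketch re-deriving both figures with Go's time package, wall-clock values copied from the record (the monotonic "m=+..." suffixes are dropped so the strings parse):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-07-07 05:55:29 +0000 UTC")             // podCreationTimestamp
        running := parse("2025-07-07 05:56:02.339863354 +0000 UTC")   // watchObservedRunningTime
        pullStart := parse("2025-07-07 05:56:00.954541213 +0000 UTC") // firstStartedPulling
        pullEnd := parse("2025-07-07 05:56:01.291428087 +0000 UTC")   // lastFinishedPulling

        e2e := running.Sub(created)         // 33.339863354s = podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // 33.00297648s  = podStartSLOduration
        fmt.Println(e2e, slo)
    }
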
Jul 7 05:56:18.907672 kubelet[2637]: E0707 05:56:18.907593 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:19.908698 kubelet[2637]: E0707 05:56:19.908636 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:20.908955 kubelet[2637]: E0707 05:56:20.908890 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:21.910081 kubelet[2637]: E0707 05:56:21.910005 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:22.911200 kubelet[2637]: E0707 05:56:22.911137 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:23.912077 kubelet[2637]: E0707 05:56:23.912001 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:24.912962 kubelet[2637]: E0707 05:56:24.912892 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:25.838766 kubelet[2637]: E0707 05:56:25.838675 2637 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:25.913370 kubelet[2637]: E0707 05:56:25.913311 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:26.913895 kubelet[2637]: E0707 05:56:26.913830 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:27.416933 kubelet[2637]: E0707 05:56:27.416875 2637 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.16.202)" Jul 7 05:56:27.914385 kubelet[2637]: E0707 05:56:27.914320 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:28.915504 kubelet[2637]: E0707 05:56:28.915431 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:29.916407 kubelet[2637]: E0707 05:56:29.916336 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:30.916827 kubelet[2637]: E0707 05:56:30.916734 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:31.917055 kubelet[2637]: E0707 05:56:31.916986 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:32.917514 kubelet[2637]: E0707 05:56:32.917441 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:33.917833 kubelet[2637]: E0707 05:56:33.917785 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:34.918402 kubelet[2637]: E0707 05:56:34.918329 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:35.918581 kubelet[2637]: E0707 
05:56:35.918507 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:36.919522 kubelet[2637]: E0707 05:56:36.919451 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:37.413673 kubelet[2637]: E0707 05:56:37.413597 2637 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.16.202)" Jul 7 05:56:37.919876 kubelet[2637]: E0707 05:56:37.919833 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:38.920724 kubelet[2637]: E0707 05:56:38.920669 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:39.921643 kubelet[2637]: E0707 05:56:39.921570 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:40.922705 kubelet[2637]: E0707 05:56:40.922618 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:41.923070 kubelet[2637]: E0707 05:56:41.922997 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:42.923487 kubelet[2637]: E0707 05:56:42.923423 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:43.924139 kubelet[2637]: E0707 05:56:43.924074 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:44.925164 kubelet[2637]: E0707 05:56:44.925084 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:45.838638 kubelet[2637]: E0707 05:56:45.838567 2637 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:45.925319 kubelet[2637]: E0707 05:56:45.925255 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:46.925618 kubelet[2637]: E0707 05:56:46.925557 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:47.414242 kubelet[2637]: E0707 05:56:47.414158 2637 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.202?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 7 05:56:47.926013 kubelet[2637]: E0707 05:56:47.925933 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:48.926828 kubelet[2637]: E0707 05:56:48.926736 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:48.982810 kubelet[2637]: E0707 05:56:48.979028 2637 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.202?timeout=10s\": unexpected EOF" Jul 7 05:56:48.982810 kubelet[2637]: I0707 05:56:48.979088 2637 
controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jul 7 05:56:49.927396 kubelet[2637]: E0707 05:56:49.927334 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:49.993123 kubelet[2637]: E0707 05:56:49.991956 2637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.202?timeout=10s\": dial tcp 172.31.20.28:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.16.202:59814->172.31.20.28:6443: read: connection reset by peer" interval="200ms" Jul 7 05:56:50.193549 kubelet[2637]: E0707 05:56:50.193203 2637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.202?timeout=10s\": dial tcp 172.31.20.28:6443: connect: connection refused" interval="400ms" Jul 7 05:56:50.594048 kubelet[2637]: E0707 05:56:50.593972 2637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.202?timeout=10s\": dial tcp 172.31.20.28:6443: connect: connection refused" interval="800ms" Jul 7 05:56:50.927942 kubelet[2637]: E0707 05:56:50.927784 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:51.396461 kubelet[2637]: E0707 05:56:51.396376 2637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.202?timeout=10s\": dial tcp 172.31.20.28:6443: connect: connection refused" interval="1.6s" Jul 7 05:56:51.928424 kubelet[2637]: E0707 05:56:51.928362 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:52.929382 kubelet[2637]: E0707 05:56:52.929326 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:52.998388 kubelet[2637]: E0707 05:56:52.998287 2637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.202?timeout=10s\": dial tcp 172.31.20.28:6443: connect: connection refused" interval="3.2s" Jul 7 05:56:53.929725 kubelet[2637]: E0707 05:56:53.929665 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:54.930828 kubelet[2637]: E0707 05:56:54.930779 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:55.931623 kubelet[2637]: E0707 05:56:55.931562 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:56.932312 kubelet[2637]: E0707 05:56:56.932245 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:57.932899 kubelet[2637]: E0707 05:56:57.932832 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:58.933453 kubelet[2637]: E0707 
05:56:58.933379 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:56:59.933664 kubelet[2637]: E0707 05:56:59.933588 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:00.934134 kubelet[2637]: E0707 05:57:00.934056 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:01.934870 kubelet[2637]: E0707 05:57:01.934811 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:02.935716 kubelet[2637]: E0707 05:57:02.935651 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:03.936180 kubelet[2637]: E0707 05:57:03.936106 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:04.936657 kubelet[2637]: E0707 05:57:04.936591 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:05.838265 kubelet[2637]: E0707 05:57:05.838197 2637 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:05.937455 kubelet[2637]: E0707 05:57:05.937401 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:06.199193 kubelet[2637]: E0707 05:57:06.199027 2637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.202?timeout=10s\": context deadline exceeded" interval="6.4s" Jul 7 05:57:06.938433 kubelet[2637]: E0707 05:57:06.938351 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:07.939202 kubelet[2637]: E0707 05:57:07.939129 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:08.939651 kubelet[2637]: E0707 05:57:08.939584 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 7 05:57:09.940325 kubelet[2637]: E0707 05:57:09.940254 2637 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
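
The "Failed to ensure lease exists, will retry" records above report retry intervals that double on each failure: 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s. A minimal sketch of that doubling pattern as observed in the log (not kubelet's actual lease-controller code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Doubling retry intervals as reported by the lease controller above:
        // 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s.
        interval := 200 * time.Millisecond
        for i := 0; i < 6; i++ {
            fmt.Println(interval)
            interval *= 2
        }
    }
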