May 13 23:42:20.156469 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 13 23:42:20.156514 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025
May 13 23:42:20.156539 kernel: KASLR disabled due to lack of seed
May 13 23:42:20.156555 kernel: efi: EFI v2.7 by EDK II
May 13 23:42:20.156571 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a733a98 MEMRESERVE=0x78557598
May 13 23:42:20.156586 kernel: secureboot: Secure boot disabled
May 13 23:42:20.156604 kernel: ACPI: Early table checksum verification disabled
May 13 23:42:20.156620 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 13 23:42:20.156636 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 13 23:42:20.156651 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 13 23:42:20.156674 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 13 23:42:20.158741 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 13 23:42:20.158775 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 13 23:42:20.158792 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 13 23:42:20.158811 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 13 23:42:20.158838 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 13 23:42:20.158855 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 13 23:42:20.158872 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 13 23:42:20.158888 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 13 23:42:20.158904 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 13 23:42:20.158921 kernel: printk: bootconsole [uart0] enabled
May 13 23:42:20.158937 kernel: NUMA: Failed to initialise from firmware
May 13 23:42:20.158954 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 13 23:42:20.158970 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 13 23:42:20.158986 kernel: Zone ranges:
May 13 23:42:20.159003 kernel:   DMA    [mem 0x0000000040000000-0x00000000ffffffff]
May 13 23:42:20.159024 kernel:   DMA32  empty
May 13 23:42:20.159041 kernel:   Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 13 23:42:20.159057 kernel: Movable zone start for each node
May 13 23:42:20.159073 kernel: Early memory node ranges
May 13 23:42:20.159090 kernel:   node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 13 23:42:20.159106 kernel:   node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 13 23:42:20.159122 kernel:   node 0: [mem 0x0000000078640000-0x00000000786effff]
May 13 23:42:20.159138 kernel:   node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 13 23:42:20.159155 kernel:   node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 13 23:42:20.159171 kernel:   node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 13 23:42:20.159187 kernel:   node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 13 23:42:20.159203 kernel:   node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 13 23:42:20.159223 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 13 23:42:20.159241 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 13 23:42:20.159265 kernel: psci: probing for conduit method from ACPI.
May 13 23:42:20.159281 kernel: psci: PSCIv1.0 detected in firmware.
May 13 23:42:20.159299 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 23:42:20.159320 kernel: psci: Trusted OS migration not required
May 13 23:42:20.159337 kernel: psci: SMC Calling Convention v1.1
May 13 23:42:20.159354 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 23:42:20.159371 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 23:42:20.159389 kernel: pcpu-alloc: [0] 0 [0] 1
May 13 23:42:20.159406 kernel: Detected PIPT I-cache on CPU0
May 13 23:42:20.159423 kernel: CPU features: detected: GIC system register CPU interface
May 13 23:42:20.159439 kernel: CPU features: detected: Spectre-v2
May 13 23:42:20.159456 kernel: CPU features: detected: Spectre-v3a
May 13 23:42:20.159473 kernel: CPU features: detected: Spectre-BHB
May 13 23:42:20.159490 kernel: CPU features: detected: ARM erratum 1742098
May 13 23:42:20.159507 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 13 23:42:20.159529 kernel: alternatives: applying boot alternatives
May 13 23:42:20.159548 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:42:20.159566 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:42:20.159583 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:42:20.159600 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:42:20.159617 kernel: Fallback order for Node 0: 0
May 13 23:42:20.159634 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
May 13 23:42:20.159651 kernel: Policy zone: Normal
May 13 23:42:20.159668 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:42:20.159709 kernel: software IO TLB: area num 2.
May 13 23:42:20.159738 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 13 23:42:20.159756 kernel: Memory: 3821048K/4030464K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 209416K reserved, 0K cma-reserved)
May 13 23:42:20.159774 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 23:42:20.159791 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:42:20.159809 kernel: rcu: RCU event tracing is enabled.
May 13 23:42:20.159827 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 23:42:20.159845 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:42:20.159862 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:42:20.159879 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:42:20.159896 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 23:42:20.159913 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 23:42:20.159935 kernel: GICv3: 96 SPIs implemented
May 13 23:42:20.159953 kernel: GICv3: 0 Extended SPIs implemented
May 13 23:42:20.159969 kernel: Root IRQ handler: gic_handle_irq
May 13 23:42:20.159987 kernel: GICv3: GICv3 features: 16 PPIs
May 13 23:42:20.160004 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 13 23:42:20.160020 kernel: ITS [mem 0x10080000-0x1009ffff]
May 13 23:42:20.160038 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:42:20.160055 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 13 23:42:20.160072 kernel: GICv3: using LPI property table @0x00000004000d0000
May 13 23:42:20.160089 kernel: ITS: Using hypervisor restricted LPI range [128]
May 13 23:42:20.160106 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 13 23:42:20.160123 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:42:20.160145 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 13 23:42:20.160162 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 13 23:42:20.160179 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 13 23:42:20.160196 kernel: Console: colour dummy device 80x25
May 13 23:42:20.160214 kernel: printk: console [tty1] enabled
May 13 23:42:20.160231 kernel: ACPI: Core revision 20230628
May 13 23:42:20.160249 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 13 23:42:20.160267 kernel: pid_max: default: 32768 minimum: 301
May 13 23:42:20.160284 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:42:20.160301 kernel: landlock: Up and running.
May 13 23:42:20.160323 kernel: SELinux: Initializing.
May 13 23:42:20.160340 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:42:20.160358 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:42:20.160375 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:42:20.160393 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:42:20.160410 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:42:20.160428 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:42:20.160445 kernel: Platform MSI: ITS@0x10080000 domain created
May 13 23:42:20.160467 kernel: PCI/MSI: ITS@0x10080000 domain created
May 13 23:42:20.160484 kernel: Remapping and enabling EFI services.
May 13 23:42:20.160501 kernel: smp: Bringing up secondary CPUs ...
May 13 23:42:20.160518 kernel: Detected PIPT I-cache on CPU1
May 13 23:42:20.160536 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 13 23:42:20.160554 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 13 23:42:20.160571 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 13 23:42:20.160588 kernel: smp: Brought up 1 node, 2 CPUs
May 13 23:42:20.160605 kernel: SMP: Total of 2 processors activated.
May 13 23:42:20.160622 kernel: CPU features: detected: 32-bit EL0 Support
May 13 23:42:20.160644 kernel: CPU features: detected: 32-bit EL1 Support
May 13 23:42:20.160662 kernel: CPU features: detected: CRC32 instructions
May 13 23:42:20.162721 kernel: CPU: All CPU(s) started at EL1
May 13 23:42:20.162752 kernel: alternatives: applying system-wide alternatives
May 13 23:42:20.162771 kernel: devtmpfs: initialized
May 13 23:42:20.162790 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:42:20.162808 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 23:42:20.162826 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:42:20.162844 kernel: SMBIOS 3.0.0 present.
May 13 23:42:20.162867 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 13 23:42:20.162884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:42:20.162902 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 23:42:20.162921 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 23:42:20.162939 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 23:42:20.162957 kernel: audit: initializing netlink subsys (disabled)
May 13 23:42:20.162976 kernel: audit: type=2000 audit(0.228:1): state=initialized audit_enabled=0 res=1
May 13 23:42:20.162999 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:42:20.163017 kernel: cpuidle: using governor menu
May 13 23:42:20.163036 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 23:42:20.163054 kernel: ASID allocator initialised with 65536 entries
May 13 23:42:20.163073 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:42:20.163091 kernel: Serial: AMBA PL011 UART driver
May 13 23:42:20.163109 kernel: Modules: 17712 pages in range for non-PLT usage
May 13 23:42:20.163127 kernel: Modules: 509232 pages in range for PLT usage
May 13 23:42:20.163145 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:42:20.163168 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:42:20.163187 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 23:42:20.163205 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 23:42:20.163223 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:42:20.163241 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:42:20.163259 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 23:42:20.163277 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 23:42:20.163295 kernel: ACPI: Added _OSI(Module Device)
May 13 23:42:20.163314 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:42:20.163338 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:42:20.163356 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:42:20.163374 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:42:20.163392 kernel: ACPI: Interpreter enabled
May 13 23:42:20.163412 kernel: ACPI: Using GIC for interrupt routing
May 13 23:42:20.163429 kernel: ACPI: MCFG table detected, 1 entries
May 13 23:42:20.163447 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 13 23:42:20.165827 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:42:20.166078 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 23:42:20.166278 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 23:42:20.166474 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 13 23:42:20.166676 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 13 23:42:20.166722 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 13 23:42:20.166742 kernel: acpiphp: Slot [1] registered
May 13 23:42:20.166761 kernel: acpiphp: Slot [2] registered
May 13 23:42:20.166779 kernel: acpiphp: Slot [3] registered
May 13 23:42:20.166803 kernel: acpiphp: Slot [4] registered
May 13 23:42:20.166821 kernel: acpiphp: Slot [5] registered
May 13 23:42:20.166839 kernel: acpiphp: Slot [6] registered
May 13 23:42:20.166856 kernel: acpiphp: Slot [7] registered
May 13 23:42:20.166874 kernel: acpiphp: Slot [8] registered
May 13 23:42:20.166892 kernel: acpiphp: Slot [9] registered
May 13 23:42:20.166910 kernel: acpiphp: Slot [10] registered
May 13 23:42:20.166927 kernel: acpiphp: Slot [11] registered
May 13 23:42:20.166945 kernel: acpiphp: Slot [12] registered
May 13 23:42:20.166963 kernel: acpiphp: Slot [13] registered
May 13 23:42:20.166985 kernel: acpiphp: Slot [14] registered
May 13 23:42:20.167003 kernel: acpiphp: Slot [15] registered
May 13 23:42:20.167020 kernel: acpiphp: Slot [16] registered
May 13 23:42:20.167038 kernel: acpiphp: Slot [17] registered
May 13 23:42:20.167056 kernel: acpiphp: Slot [18] registered
May 13 23:42:20.167073 kernel: acpiphp: Slot [19] registered
May 13 23:42:20.167091 kernel: acpiphp: Slot [20] registered
May 13 23:42:20.167108 kernel: acpiphp: Slot [21] registered
May 13 23:42:20.167126 kernel: acpiphp: Slot [22] registered
May 13 23:42:20.167148 kernel: acpiphp: Slot [23] registered
May 13 23:42:20.167166 kernel: acpiphp: Slot [24] registered
May 13 23:42:20.167184 kernel: acpiphp: Slot [25] registered
May 13 23:42:20.167202 kernel: acpiphp: Slot [26] registered
May 13 23:42:20.167219 kernel: acpiphp: Slot [27] registered
May 13 23:42:20.167237 kernel: acpiphp: Slot [28] registered
May 13 23:42:20.167254 kernel: acpiphp: Slot [29] registered
May 13 23:42:20.167272 kernel: acpiphp: Slot [30] registered
May 13 23:42:20.167290 kernel: acpiphp: Slot [31] registered
May 13 23:42:20.167308 kernel: PCI host bridge to bus 0000:00
May 13 23:42:20.167520 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 13 23:42:20.167734 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 23:42:20.167925 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 13 23:42:20.168115 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 13 23:42:20.168369 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 13 23:42:20.168641 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 13 23:42:20.171047 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 13 23:42:20.171303 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 13 23:42:20.171528 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 13 23:42:20.171774 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 13 23:42:20.171998 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 13 23:42:20.172204 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 13 23:42:20.172412 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 13 23:42:20.172633 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 13 23:42:20.174929 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 13 23:42:20.175167 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 13 23:42:20.175371 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 13 23:42:20.175581 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 13 23:42:20.175822 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 13 23:42:20.176030 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 13 23:42:20.176226 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 13 23:42:20.176404 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 23:42:20.176593 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 13 23:42:20.176618 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 23:42:20.176638 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 23:42:20.176657 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 23:42:20.176675 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 23:42:20.178841 kernel: iommu: Default domain type: Translated
May 13 23:42:20.178882 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 23:42:20.178901 kernel: efivars: Registered efivars operations
May 13 23:42:20.178920 kernel: vgaarb: loaded
May 13 23:42:20.178939 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 23:42:20.178957 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:42:20.178975 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:42:20.178994 kernel: pnp: PnP ACPI init
May 13 23:42:20.179246 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 13 23:42:20.179279 kernel: pnp: PnP ACPI: found 1 devices
May 13 23:42:20.179298 kernel: NET: Registered PF_INET protocol family
May 13 23:42:20.179316 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:42:20.179335 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 23:42:20.179354 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:42:20.179372 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:42:20.179390 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 23:42:20.179409 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 23:42:20.179427 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:42:20.179450 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:42:20.179469 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:42:20.179486 kernel: PCI: CLS 0 bytes, default 64
May 13 23:42:20.179504 kernel: kvm [1]: HYP mode not available
May 13 23:42:20.179523 kernel: Initialise system trusted keyrings
May 13 23:42:20.179542 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 23:42:20.179560 kernel: Key type asymmetric registered
May 13 23:42:20.179578 kernel: Asymmetric key parser 'x509' registered
May 13 23:42:20.179595 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 23:42:20.179618 kernel: io scheduler mq-deadline registered
May 13 23:42:20.179637 kernel: io scheduler kyber registered
May 13 23:42:20.179655 kernel: io scheduler bfq registered
May 13 23:42:20.179897 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 13 23:42:20.179925 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 23:42:20.179944 kernel: ACPI: button: Power Button [PWRB]
May 13 23:42:20.179962 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 13 23:42:20.179980 kernel: ACPI: button: Sleep Button [SLPB]
May 13 23:42:20.180004 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:42:20.180024 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 13 23:42:20.180228 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 13 23:42:20.180253 kernel: printk: console [ttyS0] disabled
May 13 23:42:20.180271 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 13 23:42:20.180290 kernel: printk: console [ttyS0] enabled
May 13 23:42:20.180308 kernel: printk: bootconsole [uart0] disabled
May 13 23:42:20.180326 kernel: thunder_xcv, ver 1.0
May 13 23:42:20.180343 kernel: thunder_bgx, ver 1.0
May 13 23:42:20.180361 kernel: nicpf, ver 1.0
May 13 23:42:20.180384 kernel: nicvf, ver 1.0
May 13 23:42:20.180600 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 23:42:20.183594 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:42:19 UTC (1747179739)
May 13 23:42:20.183632 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 23:42:20.183651 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 13 23:42:20.183670 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 23:42:20.183710 kernel: watchdog: Hard watchdog permanently disabled
May 13 23:42:20.183743 kernel: NET: Registered PF_INET6 protocol family
May 13 23:42:20.183762 kernel: Segment Routing with IPv6
May 13 23:42:20.183780 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:42:20.183798 kernel: NET: Registered PF_PACKET protocol family
May 13 23:42:20.183816 kernel: Key type dns_resolver registered
May 13 23:42:20.183834 kernel: registered taskstats version 1
May 13 23:42:20.183852 kernel: Loading compiled-in X.509 certificates
May 13 23:42:20.183870 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd'
May 13 23:42:20.183889 kernel: Key type .fscrypt registered
May 13 23:42:20.183906 kernel: Key type fscrypt-provisioning registered
May 13 23:42:20.183929 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:42:20.183947 kernel: ima: Allocated hash algorithm: sha1
May 13 23:42:20.183965 kernel: ima: No architecture policies found
May 13 23:42:20.183983 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 23:42:20.184001 kernel: clk: Disabling unused clocks
May 13 23:42:20.184019 kernel: Freeing unused kernel memory: 38464K
May 13 23:42:20.184036 kernel: Run /init as init process
May 13 23:42:20.184054 kernel:   with arguments:
May 13 23:42:20.184072 kernel:     /init
May 13 23:42:20.184094 kernel:   with environment:
May 13 23:42:20.184112 kernel:     HOME=/
May 13 23:42:20.184129 kernel:     TERM=linux
May 13 23:42:20.184147 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:42:20.184167 systemd[1]: Successfully made /usr/ read-only.
May 13 23:42:20.184192 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:42:20.184213 systemd[1]: Detected virtualization amazon.
May 13 23:42:20.184247 systemd[1]: Detected architecture arm64.
May 13 23:42:20.184267 systemd[1]: Running in initrd.
May 13 23:42:20.184286 systemd[1]: No hostname configured, using default hostname.
May 13 23:42:20.184314 systemd[1]: Hostname set to .
May 13 23:42:20.184334 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:42:20.184354 systemd[1]: Queued start job for default target initrd.target.
May 13 23:42:20.184374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:42:20.184393 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:42:20.184414 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:42:20.184440 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:42:20.184460 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:42:20.184481 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:42:20.184503 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:42:20.184523 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:42:20.184542 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:42:20.184566 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:42:20.184586 systemd[1]: Reached target paths.target - Path Units.
May 13 23:42:20.184606 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:42:20.184625 systemd[1]: Reached target swap.target - Swaps.
May 13 23:42:20.184644 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:42:20.184663 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:42:20.184683 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:42:20.184740 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:42:20.184760 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:42:20.184787 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:42:20.184807 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:42:20.184843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:42:20.184865 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:42:20.184885 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:42:20.184904 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:42:20.184924 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:42:20.184943 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:42:20.184969 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:42:20.184989 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:42:20.185009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:42:20.185029 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:42:20.185092 systemd-journald[250]: Collecting audit messages is disabled.
May 13 23:42:20.185139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:42:20.185160 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:42:20.185181 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:42:20.185200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:20.185224 systemd-journald[250]: Journal started
May 13 23:42:20.185259 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2c53f61a435d5965798642b80977ab) is 8M, max 75.3M, 67.3M free.
May 13 23:42:20.184278 systemd-modules-load[252]: Inserted module 'overlay'
May 13 23:42:20.200740 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:42:20.208340 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:42:20.208429 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:42:20.211757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:42:20.219372 kernel: Bridge firewalling registered
May 13 23:42:20.216333 systemd-modules-load[252]: Inserted module 'br_netfilter'
May 13 23:42:20.224047 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:42:20.228923 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:42:20.239926 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:42:20.246873 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:42:20.268751 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:42:20.278377 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:42:20.299481 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:42:20.330169 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:42:20.338960 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:42:20.364832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:42:20.384762 dracut-cmdline[290]: dracut-dracut-053
May 13 23:42:20.392887 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:42:20.422290 systemd-resolved[282]: Positive Trust Anchors:
May 13 23:42:20.422333 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:42:20.422396 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:42:20.553727 kernel: SCSI subsystem initialized
May 13 23:42:20.562716 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:42:20.573726 kernel: iscsi: registered transport (tcp)
May 13 23:42:20.596101 kernel: iscsi: registered transport (qla4xxx)
May 13 23:42:20.596176 kernel: QLogic iSCSI HBA Driver
May 13 23:42:20.657727 kernel: random: crng init done
May 13 23:42:20.658079 systemd-resolved[282]: Defaulting to hostname 'linux'.
May 13 23:42:20.661576 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:42:20.664042 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:42:20.690977 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:42:20.698222 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:42:20.749384 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:42:20.750837 kernel: device-mapper: uevent: version 1.0.3 May 13 23:42:20.750868 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:42:20.818756 kernel: raid6: neonx8 gen() 6584 MB/s May 13 23:42:20.835743 kernel: raid6: neonx4 gen() 6544 MB/s May 13 23:42:20.852739 kernel: raid6: neonx2 gen() 5460 MB/s May 13 23:42:20.869731 kernel: raid6: neonx1 gen() 3965 MB/s May 13 23:42:20.886720 kernel: raid6: int64x8 gen() 3641 MB/s May 13 23:42:20.903719 kernel: raid6: int64x4 gen() 3714 MB/s May 13 23:42:20.920718 kernel: raid6: int64x2 gen() 3618 MB/s May 13 23:42:20.938530 kernel: raid6: int64x1 gen() 2772 MB/s May 13 23:42:20.938570 kernel: raid6: using algorithm neonx8 gen() 6584 MB/s May 13 23:42:20.956514 kernel: raid6: .... xor() 4714 MB/s, rmw enabled May 13 23:42:20.956561 kernel: raid6: using neon recovery algorithm May 13 23:42:20.963724 kernel: xor: measuring software checksum speed May 13 23:42:20.964719 kernel: 8regs : 11940 MB/sec May 13 23:42:20.966981 kernel: 32regs : 11602 MB/sec May 13 23:42:20.967013 kernel: arm64_neon : 9573 MB/sec May 13 23:42:20.967048 kernel: xor: using function: 8regs (11940 MB/sec) May 13 23:42:21.052170 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:42:21.075792 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:42:21.082981 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:42:21.134096 systemd-udevd[472]: Using default interface naming scheme 'v255'. 
May 13 23:42:21.144204 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:42:21.159526 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:42:21.206935 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation May 13 23:42:21.266921 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:42:21.274357 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:42:21.402018 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:42:21.414350 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:42:21.468415 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:42:21.473317 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:42:21.487730 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:42:21.490420 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:42:21.496430 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:42:21.548621 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:42:21.614113 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 13 23:42:21.614205 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) May 13 23:42:21.629364 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 13 23:42:21.629907 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 13 23:42:21.630338 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:42:21.630476 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:42:21.635927 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 13 23:42:21.643428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:42:21.643566 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:42:21.650657 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:42:21.656963 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:42:21.661363 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:42:21.676734 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d0:1a:d3:d9:09 May 13 23:42:21.679046 (udev-worker)[531]: Network interface NamePolicy= disabled on kernel command line. May 13 23:42:21.704507 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 13 23:42:21.704574 kernel: nvme nvme0: pci function 0000:00:04.0 May 13 23:42:21.707135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:42:21.715941 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:42:21.728731 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 13 23:42:21.739124 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 23:42:21.739210 kernel: GPT:9289727 != 16777215 May 13 23:42:21.742626 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 23:42:21.742734 kernel: GPT:9289727 != 16777215 May 13 23:42:21.742764 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 23:42:21.743726 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 13 23:42:21.757360 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 13 23:42:21.822019 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (520) May 13 23:42:21.841739 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (533) May 13 23:42:21.942089 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 13 23:42:21.966108 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 13 23:42:21.972379 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 13 23:42:22.005226 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 13 23:42:22.069153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 13 23:42:22.076526 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 23:42:22.106317 disk-uuid[665]: Primary Header is updated. May 13 23:42:22.106317 disk-uuid[665]: Secondary Entries is updated. May 13 23:42:22.106317 disk-uuid[665]: Secondary Header is updated. May 13 23:42:22.115739 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 13 23:42:22.125766 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 13 23:42:23.142853 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 13 23:42:23.143231 disk-uuid[666]: The operation has completed successfully. May 13 23:42:23.345404 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:42:23.347408 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:42:23.425187 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
May 13 23:42:23.452657 sh[925]: Success May 13 23:42:23.477739 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 23:42:23.595944 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:42:23.606856 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:42:23.617379 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:42:23.644496 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d May 13 23:42:23.644573 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 13 23:42:23.646494 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:42:23.647883 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:42:23.649118 kernel: BTRFS info (device dm-0): using free space tree May 13 23:42:23.688718 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 13 23:42:23.706244 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:42:23.710482 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:42:23.715602 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:42:23.722942 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 13 23:42:23.780346 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:42:23.780430 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 13 23:42:23.781724 kernel: BTRFS info (device nvme0n1p6): using free space tree May 13 23:42:23.790740 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 13 23:42:23.797783 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:42:23.804193 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:42:23.811946 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:42:23.914362 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:42:23.934908 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:42:24.018334 systemd-networkd[1115]: lo: Link UP May 13 23:42:24.018756 systemd-networkd[1115]: lo: Gained carrier May 13 23:42:24.025069 systemd-networkd[1115]: Enumeration completed May 13 23:42:24.025236 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:42:24.027624 systemd[1]: Reached target network.target - Network. May 13 23:42:24.029557 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:42:24.029564 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:42:24.034671 systemd-networkd[1115]: eth0: Link UP May 13 23:42:24.034678 systemd-networkd[1115]: eth0: Gained carrier May 13 23:42:24.034950 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 13 23:42:24.055655 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.17.246/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 13 23:42:24.065395 ignition[1040]: Ignition 2.20.0 May 13 23:42:24.065424 ignition[1040]: Stage: fetch-offline May 13 23:42:24.065957 ignition[1040]: no configs at "/usr/lib/ignition/base.d" May 13 23:42:24.070948 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:42:24.065985 ignition[1040]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 13 23:42:24.078202 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 13 23:42:24.067941 ignition[1040]: Ignition finished successfully May 13 23:42:24.119051 ignition[1124]: Ignition 2.20.0 May 13 23:42:24.119563 ignition[1124]: Stage: fetch May 13 23:42:24.120190 ignition[1124]: no configs at "/usr/lib/ignition/base.d" May 13 23:42:24.120217 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 13 23:42:24.120420 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 13 23:42:24.131018 ignition[1124]: PUT result: OK May 13 23:42:24.134658 ignition[1124]: parsed url from cmdline: "" May 13 23:42:24.134709 ignition[1124]: no config URL provided May 13 23:42:24.134730 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:42:24.134760 ignition[1124]: no config at "/usr/lib/ignition/user.ign" May 13 23:42:24.134799 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 13 23:42:24.138816 ignition[1124]: PUT result: OK May 13 23:42:24.140594 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 13 23:42:24.146479 ignition[1124]: GET result: OK May 13 23:42:24.146647 ignition[1124]: parsing config with SHA512: c0ecaf5456187b7d1814056f79d09ada782bce7e7061a187c54f77b584860dba03b300d6a3d0d73eded04f453c4986a373a419863d47bf896c39cb5cb1eee51c May 13 23:42:24.157445 unknown[1124]: fetched base config from "system" May 13 
23:42:24.157474 unknown[1124]: fetched base config from "system" May 13 23:42:24.157490 unknown[1124]: fetched user config from "aws" May 13 23:42:24.160332 ignition[1124]: fetch: fetch complete May 13 23:42:24.160346 ignition[1124]: fetch: fetch passed May 13 23:42:24.166580 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 13 23:42:24.160444 ignition[1124]: Ignition finished successfully May 13 23:42:24.177207 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:42:24.218262 ignition[1130]: Ignition 2.20.0 May 13 23:42:24.218768 ignition[1130]: Stage: kargs May 13 23:42:24.219342 ignition[1130]: no configs at "/usr/lib/ignition/base.d" May 13 23:42:24.219368 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 13 23:42:24.219576 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 13 23:42:24.223449 ignition[1130]: PUT result: OK May 13 23:42:24.232357 ignition[1130]: kargs: kargs passed May 13 23:42:24.232513 ignition[1130]: Ignition finished successfully May 13 23:42:24.237511 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:42:24.242886 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 23:42:24.277859 ignition[1136]: Ignition 2.20.0 May 13 23:42:24.277889 ignition[1136]: Stage: disks May 13 23:42:24.279503 ignition[1136]: no configs at "/usr/lib/ignition/base.d" May 13 23:42:24.279536 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 13 23:42:24.279941 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 13 23:42:24.287177 ignition[1136]: PUT result: OK May 13 23:42:24.304203 ignition[1136]: disks: disks passed May 13 23:42:24.304550 ignition[1136]: Ignition finished successfully May 13 23:42:24.309207 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:42:24.313756 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
May 13 23:42:24.316757 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:42:24.324934 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:42:24.326866 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:42:24.328815 systemd[1]: Reached target basic.target - Basic System. May 13 23:42:24.337730 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:42:24.406957 systemd-fsck[1144]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:42:24.411673 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:42:24.422113 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:42:24.527742 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none. May 13 23:42:24.529211 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:42:24.533256 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:42:24.544311 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:42:24.549897 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:42:24.555106 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 23:42:24.560761 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:42:24.565040 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:42:24.588227 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:42:24.594395 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 13 23:42:24.613748 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1163) May 13 23:42:24.618208 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:42:24.618274 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 13 23:42:24.619516 kernel: BTRFS info (device nvme0n1p6): using free space tree May 13 23:42:24.635760 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 13 23:42:24.639843 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:42:24.707485 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:42:24.717649 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory May 13 23:42:24.726082 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:42:24.734397 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:42:24.890982 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:42:24.898355 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:42:24.906012 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:42:24.927138 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 13 23:42:24.931771 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:42:24.977761 ignition[1277]: INFO : Ignition 2.20.0 May 13 23:42:24.977761 ignition[1277]: INFO : Stage: mount May 13 23:42:24.983673 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:42:24.983673 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 13 23:42:24.983673 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 13 23:42:24.992508 ignition[1277]: INFO : PUT result: OK May 13 23:42:24.985785 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:42:24.998546 ignition[1277]: INFO : mount: mount passed May 13 23:42:25.000131 ignition[1277]: INFO : Ignition finished successfully May 13 23:42:25.003290 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:42:25.011859 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:42:25.035320 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:42:25.076737 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1288) May 13 23:42:25.081736 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 13 23:42:25.081814 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 13 23:42:25.081841 kernel: BTRFS info (device nvme0n1p6): using free space tree May 13 23:42:25.086731 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 13 23:42:25.091124 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:42:25.132891 ignition[1304]: INFO : Ignition 2.20.0 May 13 23:42:25.132891 ignition[1304]: INFO : Stage: files May 13 23:42:25.136235 ignition[1304]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:42:25.136235 ignition[1304]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 13 23:42:25.136235 ignition[1304]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 13 23:42:25.143841 ignition[1304]: INFO : PUT result: OK May 13 23:42:25.159458 ignition[1304]: DEBUG : files: compiled without relabeling support, skipping May 13 23:42:25.161899 ignition[1304]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:42:25.161899 ignition[1304]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:42:25.168016 ignition[1304]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:42:25.170630 ignition[1304]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:42:25.170630 ignition[1304]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:42:25.169381 unknown[1304]: wrote ssh authorized keys file for user: core May 13 23:42:25.178130 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 23:42:25.178130 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 13 23:42:25.304516 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:42:25.664856 systemd-networkd[1115]: eth0: Gained IPv6LL May 13 23:42:27.052026 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 13 23:42:27.052026 ignition[1304]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 23:42:27.062136 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 13 23:42:27.430170 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 23:42:27.830044 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 23:42:27.830044 ignition[1304]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 23:42:27.838297 ignition[1304]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:42:27.838297 ignition[1304]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:42:27.838297 ignition[1304]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 23:42:27.838297 ignition[1304]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 13 23:42:27.838297 ignition[1304]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:42:27.838297 ignition[1304]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:42:27.838297 ignition[1304]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:42:27.838297 ignition[1304]: INFO : files: files passed May 13 23:42:27.838297 ignition[1304]: INFO : Ignition finished successfully May 13 23:42:27.864649 systemd[1]: Finished ignition-files.service - Ignition (files). 
May 13 23:42:27.871974 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:42:27.885946 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 23:42:27.910953 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:42:27.912983 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 23:42:27.930978 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:42:27.930978 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:42:27.938224 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:42:27.944417 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:42:27.949362 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:42:27.955963 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:42:28.032053 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 23:42:28.034431 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:42:28.040974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:42:28.043072 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:42:28.045153 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:42:28.053952 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:42:28.095617 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:42:28.103235 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
May 13 23:42:28.148393 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:42:28.151338 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:42:28.158816 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:42:28.161163 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:42:28.161449 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:42:28.169874 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:42:28.173324 systemd[1]: Stopped target basic.target - Basic System. May 13 23:42:28.176934 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:42:28.182235 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:42:28.188723 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:42:28.191167 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:42:28.193523 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:42:28.202444 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:42:28.204895 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:42:28.207397 systemd[1]: Stopped target swap.target - Swaps. May 13 23:42:28.215153 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:42:28.217347 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:42:28.222058 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:42:28.226260 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:42:28.228991 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:42:28.233543 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 13 23:42:28.236129 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:42:28.236398 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:42:28.239110 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:42:28.239407 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:42:28.245661 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:42:28.254762 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:42:28.262764 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:42:28.268951 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:42:28.274904 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:42:28.277463 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:42:28.282628 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:42:28.284160 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:42:28.305073 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:42:28.305290 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:42:28.334562 ignition[1359]: INFO : Ignition 2.20.0 May 13 23:42:28.334562 ignition[1359]: INFO : Stage: umount May 13 23:42:28.334562 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:42:28.334562 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 13 23:42:28.334562 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 13 23:42:28.350988 ignition[1359]: INFO : PUT result: OK May 13 23:42:28.352288 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 13 23:42:28.358961 ignition[1359]: INFO : umount: umount passed May 13 23:42:28.361000 ignition[1359]: INFO : Ignition finished successfully May 13 23:42:28.366680 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:42:28.367292 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:42:28.373937 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:42:28.374434 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:42:28.383869 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:42:28.384049 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:42:28.387967 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:42:28.388477 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:42:28.391831 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 23:42:28.391951 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 13 23:42:28.394366 systemd[1]: Stopped target network.target - Network. May 13 23:42:28.405108 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:42:28.405259 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:42:28.407704 systemd[1]: Stopped target paths.target - Path Units. May 13 23:42:28.409456 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:42:28.421006 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:42:28.423754 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:42:28.430069 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:42:28.432061 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:42:28.432158 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:42:28.434276 systemd[1]: iscsiuio.socket: Deactivated successfully. 
May 13 23:42:28.434368 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:42:28.436822 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:42:28.436947 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:42:28.439486 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:42:28.439606 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:42:28.442890 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:42:28.443009 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:42:28.445675 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:42:28.448850 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:42:28.468881 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:42:28.470781 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:42:28.480467 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:42:28.481968 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:42:28.482470 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:42:28.497271 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:42:28.500273 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:42:28.502796 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:42:28.512856 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:42:28.516972 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:42:28.517118 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:42:28.519740 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:42:28.519880 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:42:28.525335 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:42:28.525461 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:42:28.529546 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:42:28.529677 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:42:28.538843 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:42:28.545755 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:42:28.545892 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:42:28.576934 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:42:28.577218 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:42:28.587667 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:42:28.589962 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:42:28.597717 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:42:28.597814 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:42:28.604654 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:42:28.604830 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:42:28.607595 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:42:28.607744 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:42:28.617320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:42:28.617436 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:42:28.627213 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:42:28.632541 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:42:28.632729 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:42:28.635518 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 23:42:28.635635 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:42:28.642337 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:42:28.642461 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:42:28.651601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:42:28.653878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:28.671682 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:42:28.671879 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:42:28.676260 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:42:28.677784 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:42:28.681379 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:42:28.681968 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:42:28.695230 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:42:28.705298 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:42:28.740422 systemd[1]: Switching root.
May 13 23:42:28.784918 systemd-journald[250]: Journal stopped
May 13 23:42:30.978294 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
May 13 23:42:30.978442 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:42:30.978493 kernel: SELinux: policy capability open_perms=1
May 13 23:42:30.978523 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:42:30.978555 kernel: SELinux: policy capability always_check_network=0
May 13 23:42:30.978585 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:42:30.978615 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:42:30.978644 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:42:30.978674 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:42:30.978734 kernel: audit: type=1403 audit(1747179749.083:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:42:30.978783 systemd[1]: Successfully loaded SELinux policy in 54.002ms.
May 13 23:42:30.978835 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 30.588ms.
May 13 23:42:30.984755 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:42:30.984824 systemd[1]: Detected virtualization amazon.
May 13 23:42:30.984861 systemd[1]: Detected architecture arm64.
May 13 23:42:30.984896 systemd[1]: Detected first boot.
May 13 23:42:30.984928 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:42:30.984959 zram_generator::config[1403]: No configuration found.
May 13 23:42:30.985006 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:42:30.985040 systemd[1]: Populated /etc with preset unit settings.
May 13 23:42:30.985075 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:42:30.985108 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:42:30.985138 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:42:30.985180 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:42:30.985219 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:42:30.985249 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:42:30.985286 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:42:30.985322 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:42:30.985353 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:42:30.985385 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:42:30.985419 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:42:30.985458 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:42:30.985490 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:42:30.985522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:42:30.985554 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:42:30.985588 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:42:30.985620 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:42:30.985652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:42:30.985683 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 23:42:30.985748 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:42:30.985781 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:42:30.985812 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:42:30.985844 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:42:30.985892 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:42:30.985925 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:42:30.985956 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:42:30.985986 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:42:30.986019 systemd[1]: Reached target swap.target - Swaps.
May 13 23:42:30.986049 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:42:30.986078 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:42:30.986108 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:42:30.986140 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:42:30.986184 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:42:30.986214 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:42:30.986243 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:42:30.986274 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:42:30.986306 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:42:30.986336 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:42:30.986368 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:42:30.986401 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:42:30.986431 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:42:30.986470 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:42:30.986507 systemd[1]: Reached target machines.target - Containers.
May 13 23:42:30.986542 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:42:30.986572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:42:30.986602 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:42:30.986636 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:42:30.986668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:42:30.994737 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:42:30.995261 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:42:30.995308 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:42:30.995339 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:42:30.995373 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:42:30.995405 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:42:30.998331 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:42:30.998390 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:42:30.998421 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:42:30.998453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:42:30.998497 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:42:30.998527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:42:30.998556 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:42:30.998589 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:42:30.998622 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:42:30.998654 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:42:30.998715 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:42:30.998758 systemd[1]: Stopped verity-setup.service.
May 13 23:42:30.998798 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:42:30.998829 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:42:30.998860 kernel: fuse: init (API version 7.39)
May 13 23:42:30.998895 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:42:30.998928 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:42:30.998966 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:42:30.998999 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:42:30.999040 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:42:30.999072 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:42:30.999109 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:42:30.999142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:42:30.999182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:42:30.999218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:42:30.999250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:42:30.999285 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:42:30.999319 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:42:30.999351 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:42:30.999395 kernel: loop: module loaded
May 13 23:42:30.999427 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:42:30.999464 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:42:30.999496 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:42:30.999596 systemd-journald[1482]: Collecting audit messages is disabled.
May 13 23:42:30.999654 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:42:31.002155 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:42:31.009856 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:42:31.010891 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:42:31.010964 kernel: ACPI: bus type drm_connector registered
May 13 23:42:31.011011 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:42:31.011046 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:42:31.011080 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:42:31.011112 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:42:31.011147 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:42:31.011182 systemd-journald[1482]: Journal started
May 13 23:42:31.011244 systemd-journald[1482]: Runtime Journal (/run/log/journal/ec2c53f61a435d5965798642b80977ab) is 8M, max 75.3M, 67.3M free.
May 13 23:42:30.331461 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:42:30.347091 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 13 23:42:30.348123 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:42:31.016290 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:42:31.021268 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:42:31.030297 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:42:31.033517 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:42:31.082330 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:42:31.084762 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:42:31.089592 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:42:31.096110 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:42:31.102551 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:42:31.105133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:42:31.114125 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:42:31.123068 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:42:31.125472 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:42:31.140124 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:42:31.151272 systemd-tmpfiles[1505]: ACLs are not supported, ignoring.
May 13 23:42:31.151307 systemd-tmpfiles[1505]: ACLs are not supported, ignoring.
May 13 23:42:31.152149 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:42:31.159193 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:42:31.164958 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:42:31.172293 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:42:31.202468 systemd-journald[1482]: Time spent on flushing to /var/log/journal/ec2c53f61a435d5965798642b80977ab is 133.778ms for 923 entries.
May 13 23:42:31.202468 systemd-journald[1482]: System Journal (/var/log/journal/ec2c53f61a435d5965798642b80977ab) is 8M, max 195.6M, 187.6M free.
May 13 23:42:31.351955 systemd-journald[1482]: Received client request to flush runtime journal.
May 13 23:42:31.352033 kernel: loop0: detected capacity change from 0 to 189592
May 13 23:42:31.254402 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:42:31.265392 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:42:31.293178 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:42:31.296338 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:42:31.313001 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:42:31.357867 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:42:31.395405 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:42:31.403230 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:42:31.439927 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:42:31.467272 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:42:31.470751 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:42:31.479361 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:42:31.486299 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:42:31.506747 kernel: loop1: detected capacity change from 0 to 126448
May 13 23:42:31.567191 udevadm[1560]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:42:31.588808 kernel: loop2: detected capacity change from 0 to 54976
May 13 23:42:31.604505 systemd-tmpfiles[1559]: ACLs are not supported, ignoring.
May 13 23:42:31.604551 systemd-tmpfiles[1559]: ACLs are not supported, ignoring.
May 13 23:42:31.624573 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:42:31.682742 kernel: loop3: detected capacity change from 0 to 103832
May 13 23:42:31.766767 kernel: loop4: detected capacity change from 0 to 189592
May 13 23:42:31.823845 kernel: loop5: detected capacity change from 0 to 126448
May 13 23:42:31.859230 kernel: loop6: detected capacity change from 0 to 54976
May 13 23:42:31.894869 kernel: loop7: detected capacity change from 0 to 103832
May 13 23:42:31.934742 (sd-merge)[1567]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 13 23:42:31.937623 (sd-merge)[1567]: Merged extensions into '/usr'.
May 13 23:42:31.960778 systemd[1]: Reload requested from client PID 1540 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:42:31.961008 systemd[1]: Reloading...
May 13 23:42:32.158729 zram_generator::config[1594]: No configuration found.
May 13 23:42:32.239732 ldconfig[1534]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:42:32.520465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:42:32.673665 systemd[1]: Reloading finished in 711 ms.
May 13 23:42:32.700510 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:42:32.703897 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:42:32.707421 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:42:32.725421 systemd[1]: Starting ensure-sysext.service...
May 13 23:42:32.733026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:42:32.739250 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:42:32.765088 systemd[1]: Reload requested from client PID 1648 ('systemctl') (unit ensure-sysext.service)...
May 13 23:42:32.765117 systemd[1]: Reloading...
May 13 23:42:32.829226 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:42:32.829776 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:42:32.831491 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:42:32.832115 systemd-tmpfiles[1649]: ACLs are not supported, ignoring.
May 13 23:42:32.832269 systemd-tmpfiles[1649]: ACLs are not supported, ignoring.
May 13 23:42:32.850939 systemd-tmpfiles[1649]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:42:32.850968 systemd-tmpfiles[1649]: Skipping /boot
May 13 23:42:32.851860 systemd-udevd[1650]: Using default interface naming scheme 'v255'.
May 13 23:42:32.901271 systemd-tmpfiles[1649]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:42:32.901298 systemd-tmpfiles[1649]: Skipping /boot
May 13 23:42:33.024784 zram_generator::config[1696]: No configuration found.
May 13 23:42:33.145956 (udev-worker)[1689]: Network interface NamePolicy= disabled on kernel command line.
May 13 23:42:33.372822 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1692)
May 13 23:42:33.424335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:42:33.625726 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 23:42:33.627095 systemd[1]: Reloading finished in 861 ms.
May 13 23:42:33.675454 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:42:33.704359 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:42:33.766081 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:42:33.769554 systemd[1]: Finished ensure-sysext.service.
May 13 23:42:33.831908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 13 23:42:33.837215 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:42:33.844982 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:42:33.847951 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:42:33.853049 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:42:33.860121 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:42:33.864925 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:42:33.876655 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:42:33.898843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:42:33.901285 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:42:33.904324 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:42:33.908935 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:42:33.913147 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:42:33.917724 lvm[1847]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:42:33.924094 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:42:33.933648 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:42:33.937014 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:42:33.943321 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:42:33.980164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:42:34.000017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:42:34.002820 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:42:34.011709 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:42:34.012155 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:42:34.015015 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:42:34.026139 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:42:34.026644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:42:34.034460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:42:34.035014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:42:34.038909 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:42:34.049415 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:42:34.078508 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:42:34.086887 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:42:34.092480 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:42:34.100367 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 23:42:34.110357 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:42:34.121710 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:42:34.130852 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:42:34.159834 lvm[1881]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:42:34.194319 augenrules[1891]: No rules
May 13 23:42:34.199305 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:42:34.203136 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:42:34.222901 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:42:34.224182 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:42:34.238796 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:42:34.253762 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:42:34.262269 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 23:42:34.302108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:42:34.394766 systemd-networkd[1860]: lo: Link UP May 13 23:42:34.395340 systemd-networkd[1860]: lo: Gained carrier May 13 23:42:34.399258 systemd-networkd[1860]: Enumeration completed May 13 23:42:34.399641 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:42:34.402470 systemd-networkd[1860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:42:34.402673 systemd-networkd[1860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:42:34.404861 systemd-networkd[1860]: eth0: Link UP May 13 23:42:34.405322 systemd-networkd[1860]: eth0: Gained carrier May 13 23:42:34.405473 systemd-networkd[1860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:42:34.406035 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:42:34.413166 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:42:34.419875 systemd-networkd[1860]: eth0: DHCPv4 address 172.31.17.246/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 13 23:42:34.422290 systemd-resolved[1861]: Positive Trust Anchors: May 13 23:42:34.422811 systemd-resolved[1861]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:42:34.422877 systemd-resolved[1861]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:42:34.434038 systemd-resolved[1861]: Defaulting to hostname 'linux'. May 13 23:42:34.445350 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:42:34.447995 systemd[1]: Reached target network.target - Network. May 13 23:42:34.450270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:42:34.465477 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:42:34.467855 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:42:34.471730 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:42:34.474660 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:42:34.477226 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:42:34.479674 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:42:34.482093 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:42:34.482146 systemd[1]: Reached target paths.target - Path Units. 
May 13 23:42:34.483991 systemd[1]: Reached target timers.target - Timer Units. May 13 23:42:34.488593 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:42:34.493642 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:42:34.500326 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:42:34.503194 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:42:34.505802 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:42:34.521933 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:42:34.524574 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:42:34.529804 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:42:34.532831 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:42:34.536232 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:42:34.538719 systemd[1]: Reached target basic.target - Basic System. May 13 23:42:34.540768 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:42:34.540848 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:42:34.542982 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:42:34.553824 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 23:42:34.567206 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:42:34.574052 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:42:34.587994 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 13 23:42:34.597599 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:42:34.607893 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:42:34.617243 systemd[1]: Started ntpd.service - Network Time Service. May 13 23:42:34.628055 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:42:34.643531 jq[1919]: false May 13 23:42:34.641943 systemd[1]: Starting setup-oem.service - Setup OEM... May 13 23:42:34.653137 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:42:34.662193 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:42:34.667947 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:42:34.669667 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:42:34.670854 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:42:34.674068 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:42:34.691420 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:42:34.702639 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:42:34.705809 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
May 13 23:42:34.778294 extend-filesystems[1920]: Found loop4 May 13 23:42:34.809991 extend-filesystems[1920]: Found loop5 May 13 23:42:34.809991 extend-filesystems[1920]: Found loop6 May 13 23:42:34.809991 extend-filesystems[1920]: Found loop7 May 13 23:42:34.809991 extend-filesystems[1920]: Found nvme0n1 May 13 23:42:34.809991 extend-filesystems[1920]: Found nvme0n1p1 May 13 23:42:34.809991 extend-filesystems[1920]: Found nvme0n1p2 May 13 23:42:34.809991 extend-filesystems[1920]: Found nvme0n1p3 May 13 23:42:34.809991 extend-filesystems[1920]: Found usr May 13 23:42:34.809991 extend-filesystems[1920]: Found nvme0n1p4 May 13 23:42:34.809991 extend-filesystems[1920]: Found nvme0n1p6 May 13 23:42:34.809991 extend-filesystems[1920]: Found nvme0n1p7 May 13 23:42:34.809991 extend-filesystems[1920]: Found nvme0n1p9 May 13 23:42:34.809991 extend-filesystems[1920]: Checking size of /dev/nvme0n1p9 May 13 23:42:34.787807 dbus-daemon[1918]: [system] SELinux support is enabled May 13 23:42:34.786661 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:42:34.899785 update_engine[1931]: I20250513 23:42:34.898082 1931 main.cc:92] Flatcar Update Engine starting May 13 23:42:34.818899 dbus-daemon[1918]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1860 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 13 23:42:34.900218 jq[1932]: true May 13 23:42:34.788519 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:42:34.824511 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.systemd1' May 13 23:42:34.795004 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 13 23:42:34.795066 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:42:34.796192 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:42:34.796246 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:42:34.839617 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 13 23:42:34.855052 (ntainerd)[1940]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:42:34.889611 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:42:34.891772 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:42:34.920383 systemd[1]: Started update-engine.service - Update Engine. May 13 23:42:34.934963 update_engine[1931]: I20250513 23:42:34.920984 1931 update_check_scheduler.cc:74] Next update check in 5m6s May 13 23:42:34.942784 tar[1938]: linux-arm64/helm May 13 23:42:34.962207 ntpd[1924]: ntpd 4.2.8p17@1.4004-o Tue May 13 21:33:15 UTC 2025 (1): Starting May 13 23:42:34.975015 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: ntpd 4.2.8p17@1.4004-o Tue May 13 21:33:15 UTC 2025 (1): Starting May 13 23:42:34.975015 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 13 23:42:34.975015 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: ---------------------------------------------------- May 13 23:42:34.975015 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: ntp-4 is maintained by Network Time Foundation, May 13 23:42:34.975015 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 13 23:42:34.975015 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: corporation. 
Support and training for ntp-4 are May 13 23:42:34.975015 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: available at https://www.nwtime.org/support May 13 23:42:34.975015 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: ---------------------------------------------------- May 13 23:42:34.975015 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: proto: precision = 0.096 usec (-23) May 13 23:42:34.962267 ntpd[1924]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 13 23:42:34.976058 extend-filesystems[1920]: Resized partition /dev/nvme0n1p9 May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: basedate set to 2025-05-01 May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: gps base set to 2025-05-04 (week 2365) May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: Listen and drop on 0 v6wildcard [::]:123 May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: Listen normally on 2 lo 127.0.0.1:123 May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: Listen normally on 3 eth0 172.31.17.246:123 May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: Listen normally on 4 lo [::1]:123 May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: bind(21) AF_INET6 fe80::4d0:1aff:fed3:d909%2#123 flags 0x11 failed: Cannot assign requested address May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: unable to create socket on eth0 (5) for fe80::4d0:1aff:fed3:d909%2#123 May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: failed to init interface for address fe80::4d0:1aff:fed3:d909%2 May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: Listening on routing socket on fd #21 for interface updates May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 13 23:42:34.992938 ntpd[1924]: 13 May 23:42:34 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock 
Unsynchronized May 13 23:42:34.962287 ntpd[1924]: ---------------------------------------------------- May 13 23:42:34.993865 extend-filesystems[1967]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:42:34.962308 ntpd[1924]: ntp-4 is maintained by Network Time Foundation, May 13 23:42:34.962327 ntpd[1924]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 13 23:42:34.962345 ntpd[1924]: corporation. Support and training for ntp-4 are May 13 23:42:34.962363 ntpd[1924]: available at https://www.nwtime.org/support May 13 23:42:34.962381 ntpd[1924]: ---------------------------------------------------- May 13 23:42:34.974574 ntpd[1924]: proto: precision = 0.096 usec (-23) May 13 23:42:34.975048 ntpd[1924]: basedate set to 2025-05-01 May 13 23:42:34.975074 ntpd[1924]: gps base set to 2025-05-04 (week 2365) May 13 23:42:34.979577 ntpd[1924]: Listen and drop on 0 v6wildcard [::]:123 May 13 23:42:34.979657 ntpd[1924]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 13 23:42:34.979973 ntpd[1924]: Listen normally on 2 lo 127.0.0.1:123 May 13 23:42:34.980038 ntpd[1924]: Listen normally on 3 eth0 172.31.17.246:123 May 13 23:42:34.980113 ntpd[1924]: Listen normally on 4 lo [::1]:123 May 13 23:42:34.980194 ntpd[1924]: bind(21) AF_INET6 fe80::4d0:1aff:fed3:d909%2#123 flags 0x11 failed: Cannot assign requested address May 13 23:42:34.980233 ntpd[1924]: unable to create socket on eth0 (5) for fe80::4d0:1aff:fed3:d909%2#123 May 13 23:42:34.980260 ntpd[1924]: failed to init interface for address fe80::4d0:1aff:fed3:d909%2 May 13 23:42:34.980316 ntpd[1924]: Listening on routing socket on fd #21 for interface updates May 13 23:42:34.984525 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 13 23:42:34.984573 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 13 23:42:35.018720 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 13 23:42:35.026451 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 13 23:42:35.029624 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:42:35.034637 jq[1958]: true May 13 23:42:35.030075 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:42:35.058881 systemd[1]: Finished setup-oem.service - Setup OEM. May 13 23:42:35.169010 coreos-metadata[1917]: May 13 23:42:35.166 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 13 23:42:35.194085 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 13 23:42:35.205384 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1690) May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.173 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.176 INFO Fetch successful May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.177 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.180 INFO Fetch successful May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.184 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.188 INFO Fetch successful May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.189 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.199 INFO Fetch successful May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.200 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.203 INFO Fetch failed with 404: resource not found May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.203 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 13 
23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.205 INFO Fetch successful May 13 23:42:35.205515 coreos-metadata[1917]: May 13 23:42:35.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 13 23:42:35.217993 extend-filesystems[1967]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 13 23:42:35.217993 extend-filesystems[1967]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:42:35.217993 extend-filesystems[1967]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 13 23:42:35.213560 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:42:35.229120 coreos-metadata[1917]: May 13 23:42:35.209 INFO Fetch successful May 13 23:42:35.229120 coreos-metadata[1917]: May 13 23:42:35.209 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 13 23:42:35.229120 coreos-metadata[1917]: May 13 23:42:35.224 INFO Fetch successful May 13 23:42:35.229120 coreos-metadata[1917]: May 13 23:42:35.226 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 13 23:42:35.229379 extend-filesystems[1920]: Resized filesystem in /dev/nvme0n1p9 May 13 23:42:35.214098 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:42:35.238950 coreos-metadata[1917]: May 13 23:42:35.238 INFO Fetch successful May 13 23:42:35.238950 coreos-metadata[1917]: May 13 23:42:35.238 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 13 23:42:35.246669 coreos-metadata[1917]: May 13 23:42:35.246 INFO Fetch successful May 13 23:42:35.381845 bash[2018]: Updated "/home/core/.ssh/authorized_keys" May 13 23:42:35.388489 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:42:35.395961 systemd[1]: Starting sshkeys.service... 
May 13 23:42:35.432395 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:42:35.435247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:42:35.521756 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 13 23:42:35.531842 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 13 23:42:35.541520 systemd-logind[1930]: Watching system buttons on /dev/input/event0 (Power Button) May 13 23:42:35.541565 systemd-logind[1930]: Watching system buttons on /dev/input/event1 (Sleep Button) May 13 23:42:35.549216 systemd-logind[1930]: New seat seat0. May 13 23:42:35.555762 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:42:35.649499 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 13 23:42:35.655067 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.hostname1' May 13 23:42:35.664072 dbus-daemon[1918]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1950 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 13 23:42:35.678035 systemd[1]: Starting polkit.service - Authorization Manager... 
May 13 23:42:35.712710 containerd[1940]: time="2025-05-13T23:42:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:42:35.716194 containerd[1940]: time="2025-05-13T23:42:35.713298876Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.753941712Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.064µs" May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.754008708Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.754049304Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.754349940Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.754387368Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.754440360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.754562748Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.754590900Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 
23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.755035944Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.755074236Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.755104128Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:42:35.757344 containerd[1940]: time="2025-05-13T23:42:35.755129244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:42:35.758130 containerd[1940]: time="2025-05-13T23:42:35.755297724Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:42:35.758130 containerd[1940]: time="2025-05-13T23:42:35.757373916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:42:35.758130 containerd[1940]: time="2025-05-13T23:42:35.757482672Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:42:35.758130 containerd[1940]: time="2025-05-13T23:42:35.757508940Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:42:35.758130 containerd[1940]: time="2025-05-13T23:42:35.757751244Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:42:35.776363 
containerd[1940]: time="2025-05-13T23:42:35.759665808Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.759877308Z" level=info msg="metadata content store policy set" policy=shared May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.766472148Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.766587576Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.767044308Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.767109180Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.767147112Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.767187696Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.767222208Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.767253252Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.767287584Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:42:35.776363 containerd[1940]: 
time="2025-05-13T23:42:35.767317980Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.767345028Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.767387412Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:42:35.776363 containerd[1940]: time="2025-05-13T23:42:35.767677188Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.767769048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.767813976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.767845140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.767873628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.767900556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.767929332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.767956752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.767986848Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.768028392Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.768055368Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.768282048Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.768326760Z" level=info msg="Start snapshots syncer" May 13 23:42:35.777101 containerd[1940]: time="2025-05-13T23:42:35.769639272Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:42:35.777610 containerd[1940]: time="2025-05-13T23:42:35.770087124Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:42:35.777610 containerd[1940]: time="2025-05-13T23:42:35.770179548Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.770318004Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.770563572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.770612952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.770643024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.770669580Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.772302228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.772361280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.772398432Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.772460700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.772493988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.772528020Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.774544488Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.774729492Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:42:35.777871 containerd[1940]: time="2025-05-13T23:42:35.774755748Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:42:35.782403 containerd[1940]: time="2025-05-13T23:42:35.774782364Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:42:35.782403 containerd[1940]: time="2025-05-13T23:42:35.774808128Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:42:35.782403 containerd[1940]: time="2025-05-13T23:42:35.774843636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:42:35.782403 containerd[1940]: time="2025-05-13T23:42:35.774872052Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:42:35.782403 containerd[1940]: time="2025-05-13T23:42:35.774932484Z" level=info msg="runtime interface created" May 13 23:42:35.782403 containerd[1940]: time="2025-05-13T23:42:35.774948576Z" level=info msg="created NRI interface" May 13 23:42:35.782403 containerd[1940]: time="2025-05-13T23:42:35.774975588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:42:35.782403 containerd[1940]: time="2025-05-13T23:42:35.775006272Z" level=info msg="Connect containerd service" May 13 23:42:35.782403 containerd[1940]: time="2025-05-13T23:42:35.775086588Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:42:35.782403 
containerd[1940]: time="2025-05-13T23:42:35.778390824Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:42:35.850523 polkitd[2069]: Started polkitd version 121 May 13 23:42:35.851945 locksmithd[1962]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:42:35.908846 polkitd[2069]: Loading rules from directory /etc/polkit-1/rules.d May 13 23:42:35.908973 polkitd[2069]: Loading rules from directory /usr/share/polkit-1/rules.d May 13 23:42:35.911768 polkitd[2069]: Finished loading, compiling and executing 2 rules May 13 23:42:35.920849 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 13 23:42:35.921487 systemd[1]: Started polkit.service - Authorization Manager. May 13 23:42:35.937964 polkitd[2069]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 13 23:42:35.963511 ntpd[1924]: bind(24) AF_INET6 fe80::4d0:1aff:fed3:d909%2#123 flags 0x11 failed: Cannot assign requested address May 13 23:42:35.973951 ntpd[1924]: 13 May 23:42:35 ntpd[1924]: bind(24) AF_INET6 fe80::4d0:1aff:fed3:d909%2#123 flags 0x11 failed: Cannot assign requested address May 13 23:42:35.973951 ntpd[1924]: 13 May 23:42:35 ntpd[1924]: unable to create socket on eth0 (6) for fe80::4d0:1aff:fed3:d909%2#123 May 13 23:42:35.973951 ntpd[1924]: 13 May 23:42:35 ntpd[1924]: failed to init interface for address fe80::4d0:1aff:fed3:d909%2 May 13 23:42:35.963573 ntpd[1924]: unable to create socket on eth0 (6) for fe80::4d0:1aff:fed3:d909%2#123 May 13 23:42:35.963600 ntpd[1924]: failed to init interface for address fe80::4d0:1aff:fed3:d909%2 May 13 23:42:36.035092 coreos-metadata[2048]: May 13 23:42:36.034 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 13 23:42:36.037813 coreos-metadata[2048]: May 13 
23:42:36.036 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 13 23:42:36.043825 coreos-metadata[2048]: May 13 23:42:36.039 INFO Fetch successful May 13 23:42:36.043825 coreos-metadata[2048]: May 13 23:42:36.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 13 23:42:36.045442 coreos-metadata[2048]: May 13 23:42:36.044 INFO Fetch successful May 13 23:42:36.049587 unknown[2048]: wrote ssh authorized keys file for user: core May 13 23:42:36.080346 systemd-hostnamed[1950]: Hostname set to (transient) May 13 23:42:36.080810 systemd-resolved[1861]: System hostname changed to 'ip-172-31-17-246'. May 13 23:42:36.138253 update-ssh-keys[2119]: Updated "/home/core/.ssh/authorized_keys" May 13 23:42:36.143534 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 13 23:42:36.157592 systemd[1]: Finished sshkeys.service. May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.249742667Z" level=info msg="Start subscribing containerd event" May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.249842183Z" level=info msg="Start recovering state" May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.249992075Z" level=info msg="Start event monitor" May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.250019195Z" level=info msg="Start cni network conf syncer for default" May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.250039139Z" level=info msg="Start streaming server" May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.250064267Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.250083359Z" level=info msg="runtime interface starting up..." May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.250101467Z" level=info msg="starting plugins..." 
May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.250130771Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.250181495Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.250322327Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:42:36.253455 containerd[1940]: time="2025-05-13T23:42:36.250474427Z" level=info msg="containerd successfully booted in 0.540102s" May 13 23:42:36.250856 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:42:36.353947 systemd-networkd[1860]: eth0: Gained IPv6LL May 13 23:42:36.362805 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:42:36.366649 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:42:36.374337 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 13 23:42:36.380761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:42:36.386612 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:42:36.510799 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:42:36.559382 amazon-ssm-agent[2134]: Initializing new seelog logger May 13 23:42:36.560389 amazon-ssm-agent[2134]: New Seelog Logger Creation Complete May 13 23:42:36.560732 amazon-ssm-agent[2134]: 2025/05/13 23:42:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:42:36.560834 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:42:36.561622 amazon-ssm-agent[2134]: 2025/05/13 23:42:36 processing appconfig overrides May 13 23:42:36.562316 amazon-ssm-agent[2134]: 2025/05/13 23:42:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
May 13 23:42:36.562417 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:42:36.562606 amazon-ssm-agent[2134]: 2025/05/13 23:42:36 processing appconfig overrides May 13 23:42:36.563021 amazon-ssm-agent[2134]: 2025/05/13 23:42:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:42:36.563105 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:42:36.563282 amazon-ssm-agent[2134]: 2025/05/13 23:42:36 processing appconfig overrides May 13 23:42:36.563613 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO Proxy environment variables: May 13 23:42:36.566107 amazon-ssm-agent[2134]: 2025/05/13 23:42:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:42:36.567716 amazon-ssm-agent[2134]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:42:36.567716 amazon-ssm-agent[2134]: 2025/05/13 23:42:36 processing appconfig overrides May 13 23:42:36.663777 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO no_proxy: May 13 23:42:36.764070 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO https_proxy: May 13 23:42:36.862505 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO http_proxy: May 13 23:42:36.962802 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO Checking if agent identity type OnPrem can be assumed May 13 23:42:37.059675 tar[1938]: linux-arm64/LICENSE May 13 23:42:37.059675 tar[1938]: linux-arm64/README.md May 13 23:42:37.064104 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO Checking if agent identity type EC2 can be assumed May 13 23:42:37.097841 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 13 23:42:37.164417 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO Agent will take identity from EC2 May 13 23:42:37.259607 sshd_keygen[1937]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:42:37.262453 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO [amazon-ssm-agent] using named pipe channel for IPC May 13 23:42:37.303534 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:42:37.313903 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:42:37.319832 systemd[1]: Started sshd@0-172.31.17.246:22-139.178.89.65:36770.service - OpenSSH per-connection server daemon (139.178.89.65:36770). May 13 23:42:37.362846 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO [amazon-ssm-agent] using named pipe channel for IPC May 13 23:42:37.368214 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:42:37.369831 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:42:37.378091 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:42:37.430932 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:42:37.438634 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:42:37.449493 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:42:37.452037 systemd[1]: Reached target getty.target - Login Prompts. 
May 13 23:42:37.464852 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO [amazon-ssm-agent] using named pipe channel for IPC May 13 23:42:37.564527 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 13 23:42:37.593876 sshd[2166]: Accepted publickey for core from 139.178.89.65 port 36770 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:42:37.599385 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:37.617435 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:42:37.624135 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:42:37.653045 systemd-logind[1930]: New session 1 of user core. May 13 23:42:37.667898 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 13 23:42:37.687348 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:42:37.698150 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:42:37.726459 (systemd)[2177]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:42:37.731572 systemd-logind[1930]: New session c1 of user core. May 13 23:42:37.767560 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO [amazon-ssm-agent] Starting Core Agent May 13 23:42:37.868812 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 13 23:42:37.968518 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO [Registrar] Starting registrar module May 13 23:42:38.068474 amazon-ssm-agent[2134]: 2025-05-13 23:42:36 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 13 23:42:38.073714 systemd[2177]: Queued start job for default target default.target. May 13 23:42:38.080279 systemd[2177]: Created slice app.slice - User Application Slice. 
May 13 23:42:38.080558 systemd[2177]: Reached target paths.target - Paths. May 13 23:42:38.080938 systemd[2177]: Reached target timers.target - Timers. May 13 23:42:38.083593 systemd[2177]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:42:38.108974 amazon-ssm-agent[2134]: 2025-05-13 23:42:38 INFO [EC2Identity] EC2 registration was successful. May 13 23:42:38.108974 amazon-ssm-agent[2134]: 2025-05-13 23:42:38 INFO [CredentialRefresher] credentialRefresher has started May 13 23:42:38.108974 amazon-ssm-agent[2134]: 2025-05-13 23:42:38 INFO [CredentialRefresher] Starting credentials refresher loop May 13 23:42:38.108974 amazon-ssm-agent[2134]: 2025-05-13 23:42:38 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 13 23:42:38.127038 systemd[2177]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:42:38.127288 systemd[2177]: Reached target sockets.target - Sockets. May 13 23:42:38.128284 systemd[2177]: Reached target basic.target - Basic System. May 13 23:42:38.128423 systemd[2177]: Reached target default.target - Main User Target. May 13 23:42:38.128489 systemd[2177]: Startup finished in 378ms. May 13 23:42:38.128926 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:42:38.140041 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:42:38.169099 amazon-ssm-agent[2134]: 2025-05-13 23:42:38 INFO [CredentialRefresher] Next credential rotation will be in 32.04165864746667 minutes May 13 23:42:38.298152 systemd[1]: Started sshd@1-172.31.17.246:22-139.178.89.65:39462.service - OpenSSH per-connection server daemon (139.178.89.65:39462). May 13 23:42:38.415938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:42:38.419192 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:42:38.425855 systemd[1]: Startup finished in 1.143s (kernel) + 9.293s (initrd) + 9.394s (userspace) = 19.832s. 
May 13 23:42:38.430169 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:42:38.512674 sshd[2188]: Accepted publickey for core from 139.178.89.65 port 39462 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:42:38.515323 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:38.524534 systemd-logind[1930]: New session 2 of user core. May 13 23:42:38.530981 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:42:38.657723 sshd[2200]: Connection closed by 139.178.89.65 port 39462 May 13 23:42:38.657847 sshd-session[2188]: pam_unix(sshd:session): session closed for user core May 13 23:42:38.666404 systemd[1]: sshd@1-172.31.17.246:22-139.178.89.65:39462.service: Deactivated successfully. May 13 23:42:38.671962 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:42:38.674089 systemd-logind[1930]: Session 2 logged out. Waiting for processes to exit. May 13 23:42:38.675678 systemd-logind[1930]: Removed session 2. May 13 23:42:38.690934 systemd[1]: Started sshd@2-172.31.17.246:22-139.178.89.65:39478.service - OpenSSH per-connection server daemon (139.178.89.65:39478). May 13 23:42:38.888710 sshd[2210]: Accepted publickey for core from 139.178.89.65 port 39478 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:42:38.891316 sshd-session[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:38.901848 systemd-logind[1930]: New session 3 of user core. May 13 23:42:38.909016 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 13 23:42:38.962969 ntpd[1924]: Listen normally on 7 eth0 [fe80::4d0:1aff:fed3:d909%2]:123 May 13 23:42:38.963512 ntpd[1924]: 13 May 23:42:38 ntpd[1924]: Listen normally on 7 eth0 [fe80::4d0:1aff:fed3:d909%2]:123 May 13 23:42:39.029973 sshd[2212]: Connection closed by 139.178.89.65 port 39478 May 13 23:42:39.029540 sshd-session[2210]: pam_unix(sshd:session): session closed for user core May 13 23:42:39.037427 systemd[1]: sshd@2-172.31.17.246:22-139.178.89.65:39478.service: Deactivated successfully. May 13 23:42:39.040861 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:42:39.043047 systemd-logind[1930]: Session 3 logged out. Waiting for processes to exit. May 13 23:42:39.046420 systemd-logind[1930]: Removed session 3. May 13 23:42:39.070170 systemd[1]: Started sshd@3-172.31.17.246:22-139.178.89.65:39480.service - OpenSSH per-connection server daemon (139.178.89.65:39480). May 13 23:42:39.140282 amazon-ssm-agent[2134]: 2025-05-13 23:42:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 13 23:42:39.242016 amazon-ssm-agent[2134]: 2025-05-13 23:42:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2223) started May 13 23:42:39.284440 sshd[2219]: Accepted publickey for core from 139.178.89.65 port 39480 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:42:39.286825 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:39.349859 amazon-ssm-agent[2134]: 2025-05-13 23:42:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 13 23:42:39.370295 systemd-logind[1930]: New session 4 of user core. May 13 23:42:39.379288 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 13 23:42:39.479245 kubelet[2195]: E0513 23:42:39.479123 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:42:39.482492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:42:39.482869 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:42:39.485083 systemd[1]: kubelet.service: Consumed 1.292s CPU time, 236M memory peak. May 13 23:42:39.515907 sshd[2231]: Connection closed by 139.178.89.65 port 39480 May 13 23:42:39.517165 sshd-session[2219]: pam_unix(sshd:session): session closed for user core May 13 23:42:39.524127 systemd-logind[1930]: Session 4 logged out. Waiting for processes to exit. May 13 23:42:39.525653 systemd[1]: sshd@3-172.31.17.246:22-139.178.89.65:39480.service: Deactivated successfully. May 13 23:42:39.528960 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:42:39.530780 systemd-logind[1930]: Removed session 4. May 13 23:42:39.556074 systemd[1]: Started sshd@4-172.31.17.246:22-139.178.89.65:39484.service - OpenSSH per-connection server daemon (139.178.89.65:39484). May 13 23:42:39.753200 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 39484 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:42:39.755597 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:39.764015 systemd-logind[1930]: New session 5 of user core. May 13 23:42:39.771943 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 13 23:42:39.891932 sudo[2245]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:42:39.892562 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:42:39.911309 sudo[2245]: pam_unix(sudo:session): session closed for user root May 13 23:42:39.935766 sshd[2244]: Connection closed by 139.178.89.65 port 39484 May 13 23:42:39.936846 sshd-session[2242]: pam_unix(sshd:session): session closed for user core May 13 23:42:39.942444 systemd-logind[1930]: Session 5 logged out. Waiting for processes to exit. May 13 23:42:39.943960 systemd[1]: sshd@4-172.31.17.246:22-139.178.89.65:39484.service: Deactivated successfully. May 13 23:42:39.946863 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:42:39.950356 systemd-logind[1930]: Removed session 5. May 13 23:42:39.970454 systemd[1]: Started sshd@5-172.31.17.246:22-139.178.89.65:39488.service - OpenSSH per-connection server daemon (139.178.89.65:39488). May 13 23:42:40.170104 sshd[2251]: Accepted publickey for core from 139.178.89.65 port 39488 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:42:40.172312 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:40.180993 systemd-logind[1930]: New session 6 of user core. May 13 23:42:40.191944 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 13 23:42:40.297654 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:42:40.298411 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:42:40.305664 sudo[2255]: pam_unix(sudo:session): session closed for user root May 13 23:42:40.315528 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:42:40.316202 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:42:40.332809 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:42:40.397148 augenrules[2277]: No rules May 13 23:42:40.399786 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:42:40.400235 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:42:40.402616 sudo[2254]: pam_unix(sudo:session): session closed for user root May 13 23:42:40.426540 sshd[2253]: Connection closed by 139.178.89.65 port 39488 May 13 23:42:40.427282 sshd-session[2251]: pam_unix(sshd:session): session closed for user core May 13 23:42:40.434141 systemd[1]: sshd@5-172.31.17.246:22-139.178.89.65:39488.service: Deactivated successfully. May 13 23:42:40.437211 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:42:40.438417 systemd-logind[1930]: Session 6 logged out. Waiting for processes to exit. May 13 23:42:40.440219 systemd-logind[1930]: Removed session 6. May 13 23:42:40.464795 systemd[1]: Started sshd@6-172.31.17.246:22-139.178.89.65:39502.service - OpenSSH per-connection server daemon (139.178.89.65:39502). 
May 13 23:42:40.657415 sshd[2286]: Accepted publickey for core from 139.178.89.65 port 39502 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:42:40.660024 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:42:40.669020 systemd-logind[1930]: New session 7 of user core. May 13 23:42:40.675967 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:42:40.780109 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:42:40.780831 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:42:41.272497 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:42:41.289203 (dockerd)[2307]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:42:41.628653 dockerd[2307]: time="2025-05-13T23:42:41.628012985Z" level=info msg="Starting up" May 13 23:42:41.633654 dockerd[2307]: time="2025-05-13T23:42:41.633496205Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:42:41.815471 dockerd[2307]: time="2025-05-13T23:42:41.815168706Z" level=info msg="Loading containers: start." May 13 23:42:41.534720 systemd-resolved[1861]: Clock change detected. Flushing caches. May 13 23:42:41.552282 systemd-journald[1482]: Time jumped backwards, rotating. May 13 23:42:41.635265 kernel: Initializing XFRM netlink socket May 13 23:42:41.638717 (udev-worker)[2331]: Network interface NamePolicy= disabled on kernel command line. May 13 23:42:41.755763 systemd-networkd[1860]: docker0: Link UP May 13 23:42:41.832910 dockerd[2307]: time="2025-05-13T23:42:41.832518185Z" level=info msg="Loading containers: done." 
May 13 23:42:41.870952 dockerd[2307]: time="2025-05-13T23:42:41.870877697Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:42:41.871115 dockerd[2307]: time="2025-05-13T23:42:41.871015697Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:42:41.871294 dockerd[2307]: time="2025-05-13T23:42:41.871258853Z" level=info msg="Daemon has completed initialization" May 13 23:42:41.925221 dockerd[2307]: time="2025-05-13T23:42:41.924972269Z" level=info msg="API listen on /run/docker.sock" May 13 23:42:41.925086 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:42:43.067439 containerd[1940]: time="2025-05-13T23:42:43.067358907Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 23:42:43.670743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993760717.mount: Deactivated successfully. 
May 13 23:42:44.898663 containerd[1940]: time="2025-05-13T23:42:44.898583372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:42:44.900436 containerd[1940]: time="2025-05-13T23:42:44.900346124Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608" May 13 23:42:44.901426 containerd[1940]: time="2025-05-13T23:42:44.901338752Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:42:44.906142 containerd[1940]: time="2025-05-13T23:42:44.906044684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:42:44.908290 containerd[1940]: time="2025-05-13T23:42:44.908029496Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.840615197s" May 13 23:42:44.908290 containerd[1940]: time="2025-05-13T23:42:44.908086364Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 13 23:42:44.909147 containerd[1940]: time="2025-05-13T23:42:44.909085820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 23:42:46.233266 containerd[1940]: time="2025-05-13T23:42:46.233093731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:46.234852 containerd[1940]: time="2025-05-13T23:42:46.234800635Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978"
May 13 23:42:46.236771 containerd[1940]: time="2025-05-13T23:42:46.236685223Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:46.241325 containerd[1940]: time="2025-05-13T23:42:46.241264939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:46.245201 containerd[1940]: time="2025-05-13T23:42:46.244962511Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.335666127s"
May 13 23:42:46.245201 containerd[1940]: time="2025-05-13T23:42:46.245015203Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 13 23:42:46.246042 containerd[1940]: time="2025-05-13T23:42:46.245998351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 13 23:42:47.512872 containerd[1940]: time="2025-05-13T23:42:47.512809485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:47.514629 containerd[1940]: time="2025-05-13T23:42:47.514555449Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813"
May 13 23:42:47.516507 containerd[1940]: time="2025-05-13T23:42:47.516416217Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:47.521204 containerd[1940]: time="2025-05-13T23:42:47.521108673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:47.523266 containerd[1940]: time="2025-05-13T23:42:47.523155837Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.277070846s"
May 13 23:42:47.523266 containerd[1940]: time="2025-05-13T23:42:47.523206309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 13 23:42:47.524029 containerd[1940]: time="2025-05-13T23:42:47.523981161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 13 23:42:48.762009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3791204481.mount: Deactivated successfully.
May 13 23:42:49.267750 containerd[1940]: time="2025-05-13T23:42:49.267456718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:49.268792 containerd[1940]: time="2025-05-13T23:42:49.268719130Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917"
May 13 23:42:49.269676 containerd[1940]: time="2025-05-13T23:42:49.269587810Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:49.272619 containerd[1940]: time="2025-05-13T23:42:49.272541478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:49.274176 containerd[1940]: time="2025-05-13T23:42:49.273866602Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.749827673s"
May 13 23:42:49.274176 containerd[1940]: time="2025-05-13T23:42:49.273914854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 13 23:42:49.274928 containerd[1940]: time="2025-05-13T23:42:49.274759582Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 23:42:49.304975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 23:42:49.308141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:42:49.620859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:42:49.637355 (kubelet)[2586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:42:49.713493 kubelet[2586]: E0513 23:42:49.713410 2586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:42:49.726007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:42:49.727033 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:42:49.727620 systemd[1]: kubelet.service: Consumed 302ms CPU time, 95.8M memory peak.
May 13 23:42:49.838191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416668985.mount: Deactivated successfully.
May 13 23:42:50.961295 containerd[1940]: time="2025-05-13T23:42:50.960636650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:50.962993 containerd[1940]: time="2025-05-13T23:42:50.962901650Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
May 13 23:42:50.965694 containerd[1940]: time="2025-05-13T23:42:50.965612966Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:50.971259 containerd[1940]: time="2025-05-13T23:42:50.971145998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:50.973272 containerd[1940]: time="2025-05-13T23:42:50.973030934Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.698218372s"
May 13 23:42:50.973272 containerd[1940]: time="2025-05-13T23:42:50.973082294Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 13 23:42:50.973900 containerd[1940]: time="2025-05-13T23:42:50.973863254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 23:42:51.480411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688147550.mount: Deactivated successfully.
May 13 23:42:51.491296 containerd[1940]: time="2025-05-13T23:42:51.490923973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:42:51.493369 containerd[1940]: time="2025-05-13T23:42:51.493276597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 13 23:42:51.495956 containerd[1940]: time="2025-05-13T23:42:51.495875329Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:42:51.500550 containerd[1940]: time="2025-05-13T23:42:51.500497789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:42:51.502004 containerd[1940]: time="2025-05-13T23:42:51.501779269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 527.763327ms"
May 13 23:42:51.502004 containerd[1940]: time="2025-05-13T23:42:51.501833233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 13 23:42:51.502922 containerd[1940]: time="2025-05-13T23:42:51.502392277Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 13 23:42:52.059638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737627658.mount: Deactivated successfully.
May 13 23:42:54.042332 containerd[1940]: time="2025-05-13T23:42:54.042270242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:54.047273 containerd[1940]: time="2025-05-13T23:42:54.047154014Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
May 13 23:42:54.051991 containerd[1940]: time="2025-05-13T23:42:54.051902498Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:54.061451 containerd[1940]: time="2025-05-13T23:42:54.061342358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:42:54.064783 containerd[1940]: time="2025-05-13T23:42:54.063152690Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.560707145s"
May 13 23:42:54.064783 containerd[1940]: time="2025-05-13T23:42:54.063213026Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 13 23:42:59.800833 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 23:42:59.806538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:00.126473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:00.138723 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:43:00.222075 kubelet[2724]: E0513 23:43:00.221437 2724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:43:00.226755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:43:00.227626 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:43:00.228678 systemd[1]: kubelet.service: Consumed 290ms CPU time, 94.5M memory peak.
May 13 23:43:00.894331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:00.894682 systemd[1]: kubelet.service: Consumed 290ms CPU time, 94.5M memory peak.
May 13 23:43:00.898468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:00.962131 systemd[1]: Reload requested from client PID 2738 ('systemctl') (unit session-7.scope)...
May 13 23:43:00.962378 systemd[1]: Reloading...
May 13 23:43:01.241489 zram_generator::config[2790]: No configuration found.
May 13 23:43:01.466067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:43:01.693920 systemd[1]: Reloading finished in 730 ms.
May 13 23:43:01.808358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:01.813920 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:43:01.816308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:01.816404 systemd[1]: kubelet.service: Consumed 226ms CPU time, 82.6M memory peak.
May 13 23:43:01.819719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:43:02.122050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:43:02.135782 (kubelet)[2849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:43:02.210580 kubelet[2849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:43:02.210580 kubelet[2849]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 23:43:02.210580 kubelet[2849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:43:02.211103 kubelet[2849]: I0513 23:43:02.210722 2849 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:43:03.731200 kubelet[2849]: I0513 23:43:03.731124 2849 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 23:43:03.731200 kubelet[2849]: I0513 23:43:03.731181 2849 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:43:03.731863 kubelet[2849]: I0513 23:43:03.731694 2849 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 23:43:03.798179 kubelet[2849]: E0513 23:43:03.798084 2849 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.246:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.246:6443: connect: connection refused" logger="UnhandledError"
May 13 23:43:03.800072 kubelet[2849]: I0513 23:43:03.800013 2849 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:43:03.826660 kubelet[2849]: I0513 23:43:03.826617 2849 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 23:43:03.833440 kubelet[2849]: I0513 23:43:03.833386 2849 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:43:03.834795 kubelet[2849]: I0513 23:43:03.834746 2849 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 23:43:03.835121 kubelet[2849]: I0513 23:43:03.835060 2849 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:43:03.835438 kubelet[2849]: I0513 23:43:03.835114 2849 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-246","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 23:43:03.835625 kubelet[2849]: I0513 23:43:03.835450 2849 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:43:03.835625 kubelet[2849]: I0513 23:43:03.835471 2849 container_manager_linux.go:300] "Creating device plugin manager"
May 13 23:43:03.835723 kubelet[2849]: I0513 23:43:03.835685 2849 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:43:03.839384 kubelet[2849]: I0513 23:43:03.839329 2849 kubelet.go:408] "Attempting to sync node with API server"
May 13 23:43:03.839384 kubelet[2849]: I0513 23:43:03.839382 2849 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:43:03.839543 kubelet[2849]: I0513 23:43:03.839437 2849 kubelet.go:314] "Adding apiserver pod source"
May 13 23:43:03.839543 kubelet[2849]: I0513 23:43:03.839466 2849 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:43:03.843126 kubelet[2849]: W0513 23:43:03.842742 2849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-246&limit=500&resourceVersion=0": dial tcp 172.31.17.246:6443: connect: connection refused
May 13 23:43:03.843775 kubelet[2849]: E0513 23:43:03.843386 2849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-246&limit=500&resourceVersion=0\": dial tcp 172.31.17.246:6443: connect: connection refused" logger="UnhandledError"
May 13 23:43:03.843775 kubelet[2849]: I0513 23:43:03.843565 2849 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:43:03.846484 kubelet[2849]: I0513 23:43:03.846450 2849 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:43:03.847906 kubelet[2849]: W0513 23:43:03.847879 2849 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 23:43:03.849818 kubelet[2849]: I0513 23:43:03.849584 2849 server.go:1269] "Started kubelet"
May 13 23:43:03.855372 kubelet[2849]: W0513 23:43:03.855242 2849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.246:6443: connect: connection refused
May 13 23:43:03.855526 kubelet[2849]: E0513 23:43:03.855368 2849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.246:6443: connect: connection refused" logger="UnhandledError"
May 13 23:43:03.858872 kubelet[2849]: E0513 23:43:03.855463 2849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.246:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.246:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-246.183f3ac1a71891a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-246,UID:ip-172-31-17-246,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-246,},FirstTimestamp:2025-05-13 23:43:03.849546146 +0000 UTC m=+1.707900825,LastTimestamp:2025-05-13 23:43:03.849546146 +0000 UTC m=+1.707900825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-246,}"
May 13 23:43:03.858872 kubelet[2849]: I0513 23:43:03.858640 2849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:43:03.859127 kubelet[2849]: I0513 23:43:03.858901 2849 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 23:43:03.859127 kubelet[2849]: I0513 23:43:03.859037 2849 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:43:03.862143 kubelet[2849]: I0513 23:43:03.862079 2849 server.go:460] "Adding debug handlers to kubelet server"
May 13 23:43:03.863771 kubelet[2849]: I0513 23:43:03.863677 2849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:43:03.864078 kubelet[2849]: I0513 23:43:03.864036 2849 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:43:03.865603 kubelet[2849]: I0513 23:43:03.865506 2849 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 23:43:03.869518 kubelet[2849]: I0513 23:43:03.869283 2849 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 23:43:03.869518 kubelet[2849]: I0513 23:43:03.869395 2849 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:43:03.869982 kubelet[2849]: W0513 23:43:03.869914 2849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.246:6443: connect: connection refused
May 13 23:43:03.870067 kubelet[2849]: E0513 23:43:03.870001 2849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.246:6443: connect: connection refused" logger="UnhandledError"
May 13 23:43:03.870969 kubelet[2849]: E0513 23:43:03.870559 2849 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-246\" not found"
May 13 23:43:03.872880 kubelet[2849]: E0513 23:43:03.872799 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-246?timeout=10s\": dial tcp 172.31.17.246:6443: connect: connection refused" interval="200ms"
May 13 23:43:03.874977 kubelet[2849]: I0513 23:43:03.874327 2849 factory.go:221] Registration of the systemd container factory successfully
May 13 23:43:03.874977 kubelet[2849]: I0513 23:43:03.874474 2849 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:43:03.878302 kubelet[2849]: I0513 23:43:03.878249 2849 factory.go:221] Registration of the containerd container factory successfully
May 13 23:43:03.889980 kubelet[2849]: E0513 23:43:03.889920 2849 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:43:03.899932 kubelet[2849]: I0513 23:43:03.899851 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:43:03.907852 kubelet[2849]: I0513 23:43:03.907781 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:43:03.907852 kubelet[2849]: I0513 23:43:03.907842 2849 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 23:43:03.908041 kubelet[2849]: I0513 23:43:03.907881 2849 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 23:43:03.908041 kubelet[2849]: E0513 23:43:03.907975 2849 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:43:03.912332 kubelet[2849]: W0513 23:43:03.912274 2849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.246:6443: connect: connection refused
May 13 23:43:03.912565 kubelet[2849]: E0513 23:43:03.912533 2849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.246:6443: connect: connection refused" logger="UnhandledError"
May 13 23:43:03.922293 kubelet[2849]: I0513 23:43:03.921894 2849 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 23:43:03.922293 kubelet[2849]: I0513 23:43:03.921927 2849 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 23:43:03.922293 kubelet[2849]: I0513 23:43:03.921959 2849 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:43:03.926486 kubelet[2849]: I0513 23:43:03.926436 2849 policy_none.go:49] "None policy: Start"
May 13 23:43:03.928047 kubelet[2849]: I0513 23:43:03.927996 2849 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 23:43:03.928047 kubelet[2849]: I0513 23:43:03.928047 2849 state_mem.go:35] "Initializing new in-memory state store"
May 13 23:43:03.941970 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 13 23:43:03.966185 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 13 23:43:03.971151 kubelet[2849]: E0513 23:43:03.971089 2849 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-246\" not found"
May 13 23:43:03.973612 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 13 23:43:03.985129 kubelet[2849]: I0513 23:43:03.984804 2849 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 23:43:03.985129 kubelet[2849]: I0513 23:43:03.985109 2849 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 23:43:03.987370 kubelet[2849]: I0513 23:43:03.985129 2849 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 23:43:03.987370 kubelet[2849]: I0513 23:43:03.985826 2849 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 23:43:03.991289 kubelet[2849]: E0513 23:43:03.991004 2849 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-246\" not found"
May 13 23:43:04.025984 systemd[1]: Created slice kubepods-burstable-pod57f971a93808911126aedd99ae4859fa.slice - libcontainer container kubepods-burstable-pod57f971a93808911126aedd99ae4859fa.slice.
May 13 23:43:04.046736 systemd[1]: Created slice kubepods-burstable-pod0e9d1fc053e3fb6d3f002bf4718aa3d4.slice - libcontainer container kubepods-burstable-pod0e9d1fc053e3fb6d3f002bf4718aa3d4.slice.
May 13 23:43:04.067557 systemd[1]: Created slice kubepods-burstable-podba21f3e1fac6027b58b3375205dc1e8e.slice - libcontainer container kubepods-burstable-podba21f3e1fac6027b58b3375205dc1e8e.slice.
May 13 23:43:04.071298 kubelet[2849]: I0513 23:43:04.070775 2849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e9d1fc053e3fb6d3f002bf4718aa3d4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-246\" (UID: \"0e9d1fc053e3fb6d3f002bf4718aa3d4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-246"
May 13 23:43:04.071298 kubelet[2849]: I0513 23:43:04.070870 2849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e9d1fc053e3fb6d3f002bf4718aa3d4-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-246\" (UID: \"0e9d1fc053e3fb6d3f002bf4718aa3d4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-246"
May 13 23:43:04.071298 kubelet[2849]: I0513 23:43:04.070939 2849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0e9d1fc053e3fb6d3f002bf4718aa3d4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-246\" (UID: \"0e9d1fc053e3fb6d3f002bf4718aa3d4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-246"
May 13 23:43:04.071298 kubelet[2849]: I0513 23:43:04.071007 2849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e9d1fc053e3fb6d3f002bf4718aa3d4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-246\" (UID: \"0e9d1fc053e3fb6d3f002bf4718aa3d4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-246"
May 13 23:43:04.071298 kubelet[2849]: I0513 23:43:04.071048 2849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e9d1fc053e3fb6d3f002bf4718aa3d4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-246\" (UID: \"0e9d1fc053e3fb6d3f002bf4718aa3d4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-246"
May 13 23:43:04.071600 kubelet[2849]: I0513 23:43:04.071116 2849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba21f3e1fac6027b58b3375205dc1e8e-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-246\" (UID: \"ba21f3e1fac6027b58b3375205dc1e8e\") " pod="kube-system/kube-scheduler-ip-172-31-17-246"
May 13 23:43:04.071600 kubelet[2849]: I0513 23:43:04.071179 2849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57f971a93808911126aedd99ae4859fa-ca-certs\") pod \"kube-apiserver-ip-172-31-17-246\" (UID: \"57f971a93808911126aedd99ae4859fa\") " pod="kube-system/kube-apiserver-ip-172-31-17-246"
May 13 23:43:04.071600 kubelet[2849]: I0513 23:43:04.071221 2849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57f971a93808911126aedd99ae4859fa-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-246\" (UID: \"57f971a93808911126aedd99ae4859fa\") " pod="kube-system/kube-apiserver-ip-172-31-17-246"
May 13 23:43:04.071600 kubelet[2849]: I0513 23:43:04.071290 2849 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57f971a93808911126aedd99ae4859fa-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-246\" (UID: \"57f971a93808911126aedd99ae4859fa\") " pod="kube-system/kube-apiserver-ip-172-31-17-246"
May 13 23:43:04.073439 kubelet[2849]: E0513 23:43:04.073343 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-246?timeout=10s\": dial tcp 172.31.17.246:6443: connect: connection refused" interval="400ms"
May 13 23:43:04.088006 kubelet[2849]: I0513 23:43:04.087946 2849 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-246"
May 13 23:43:04.088548 kubelet[2849]: E0513 23:43:04.088490 2849 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.246:6443/api/v1/nodes\": dial tcp 172.31.17.246:6443: connect: connection refused" node="ip-172-31-17-246"
May 13 23:43:04.291498 kubelet[2849]: I0513 23:43:04.291034 2849 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-246"
May 13 23:43:04.291606 kubelet[2849]: E0513 23:43:04.291536 2849 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.246:6443/api/v1/nodes\": dial tcp 172.31.17.246:6443: connect: connection refused" node="ip-172-31-17-246"
May 13 23:43:04.343015 containerd[1940]: time="2025-05-13T23:43:04.342940681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-246,Uid:57f971a93808911126aedd99ae4859fa,Namespace:kube-system,Attempt:0,}"
May 13 23:43:04.364438 containerd[1940]: time="2025-05-13T23:43:04.364366321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-246,Uid:0e9d1fc053e3fb6d3f002bf4718aa3d4,Namespace:kube-system,Attempt:0,}"
May 13 23:43:04.372764 containerd[1940]: time="2025-05-13T23:43:04.372696001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-246,Uid:ba21f3e1fac6027b58b3375205dc1e8e,Namespace:kube-system,Attempt:0,}"
May 13 23:43:04.387212 containerd[1940]: time="2025-05-13T23:43:04.386884429Z" level=info msg="connecting to shim 7c86a28a2713bbd5572c1b745db75f18d3a170e5d2f6f2c72e3e6b7545a7bc54" address="unix:///run/containerd/s/ea05590fc649d7720b87b6653855e2c880db10ee54834d87e5e8d52e10b5c42e" namespace=k8s.io protocol=ttrpc version=3
May 13 23:43:04.439541 systemd[1]: Started cri-containerd-7c86a28a2713bbd5572c1b745db75f18d3a170e5d2f6f2c72e3e6b7545a7bc54.scope - libcontainer container 7c86a28a2713bbd5572c1b745db75f18d3a170e5d2f6f2c72e3e6b7545a7bc54.
May 13 23:43:04.474600 kubelet[2849]: E0513 23:43:04.474521 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-246?timeout=10s\": dial tcp 172.31.17.246:6443: connect: connection refused" interval="800ms"
May 13 23:43:04.477192 containerd[1940]: time="2025-05-13T23:43:04.477106405Z" level=info msg="connecting to shim 708a7d7070379cd0eab1326421406f5dd6819c383d24d46df1f5664bbd503fc2" address="unix:///run/containerd/s/af3a7e94d2b6137f615dd0b935c850450efd2e23126250e27e14f5416657f05b" namespace=k8s.io protocol=ttrpc version=3
May 13 23:43:04.497896 containerd[1940]: time="2025-05-13T23:43:04.494014814Z" level=info msg="connecting to shim c995bbec20f3b7f3aa171704449013b05541d914cdfbe638d4499ac17580e9ac" address="unix:///run/containerd/s/24a408e62561b42b1a26a8e1df5323f8c673fe1e31eb193f55600c74852cb531" namespace=k8s.io protocol=ttrpc version=3
May 13 23:43:04.555612 systemd[1]: Started cri-containerd-708a7d7070379cd0eab1326421406f5dd6819c383d24d46df1f5664bbd503fc2.scope - libcontainer container 708a7d7070379cd0eab1326421406f5dd6819c383d24d46df1f5664bbd503fc2.
May 13 23:43:04.582565 systemd[1]: Started cri-containerd-c995bbec20f3b7f3aa171704449013b05541d914cdfbe638d4499ac17580e9ac.scope - libcontainer container c995bbec20f3b7f3aa171704449013b05541d914cdfbe638d4499ac17580e9ac.
May 13 23:43:04.615696 containerd[1940]: time="2025-05-13T23:43:04.615591806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-246,Uid:57f971a93808911126aedd99ae4859fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c86a28a2713bbd5572c1b745db75f18d3a170e5d2f6f2c72e3e6b7545a7bc54\""
May 13 23:43:04.623296 containerd[1940]: time="2025-05-13T23:43:04.623211626Z" level=info msg="CreateContainer within sandbox \"7c86a28a2713bbd5572c1b745db75f18d3a170e5d2f6f2c72e3e6b7545a7bc54\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 23:43:04.656755 containerd[1940]: time="2025-05-13T23:43:04.655905878Z" level=info msg="Container cf582d6cd526902dcaaa98ae27f8c468395fbdc45e931ea93acbcfef511fce85: CDI devices from CRI Config.CDIDevices: []"
May 13 23:43:04.656755 containerd[1940]: time="2025-05-13T23:43:04.656609174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-246,Uid:0e9d1fc053e3fb6d3f002bf4718aa3d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"708a7d7070379cd0eab1326421406f5dd6819c383d24d46df1f5664bbd503fc2\""
May 13 23:43:04.662167 containerd[1940]: time="2025-05-13T23:43:04.662104394Z" level=info msg="CreateContainer within sandbox \"708a7d7070379cd0eab1326421406f5dd6819c383d24d46df1f5664bbd503fc2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 23:43:04.688883 containerd[1940]: time="2025-05-13T23:43:04.688792203Z" level=info msg="CreateContainer within sandbox \"7c86a28a2713bbd5572c1b745db75f18d3a170e5d2f6f2c72e3e6b7545a7bc54\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf582d6cd526902dcaaa98ae27f8c468395fbdc45e931ea93acbcfef511fce85\""
May 13 23:43:04.700015 containerd[1940]: time="2025-05-13T23:43:04.699942195Z" level=info msg="StartContainer for \"cf582d6cd526902dcaaa98ae27f8c468395fbdc45e931ea93acbcfef511fce85\""
May 13 23:43:04.701210 kubelet[2849]: I0513 23:43:04.700664
2849 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-246" May 13 23:43:04.701210 kubelet[2849]: E0513 23:43:04.701127 2849 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.246:6443/api/v1/nodes\": dial tcp 172.31.17.246:6443: connect: connection refused" node="ip-172-31-17-246" May 13 23:43:04.702253 containerd[1940]: time="2025-05-13T23:43:04.702161823Z" level=info msg="connecting to shim cf582d6cd526902dcaaa98ae27f8c468395fbdc45e931ea93acbcfef511fce85" address="unix:///run/containerd/s/ea05590fc649d7720b87b6653855e2c880db10ee54834d87e5e8d52e10b5c42e" protocol=ttrpc version=3 May 13 23:43:04.718123 containerd[1940]: time="2025-05-13T23:43:04.718053459Z" level=info msg="Container 5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:04.747203 containerd[1940]: time="2025-05-13T23:43:04.747146211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-246,Uid:ba21f3e1fac6027b58b3375205dc1e8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c995bbec20f3b7f3aa171704449013b05541d914cdfbe638d4499ac17580e9ac\"" May 13 23:43:04.753911 containerd[1940]: time="2025-05-13T23:43:04.753132375Z" level=info msg="CreateContainer within sandbox \"c995bbec20f3b7f3aa171704449013b05541d914cdfbe638d4499ac17580e9ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:43:04.768431 systemd[1]: Started cri-containerd-cf582d6cd526902dcaaa98ae27f8c468395fbdc45e931ea93acbcfef511fce85.scope - libcontainer container cf582d6cd526902dcaaa98ae27f8c468395fbdc45e931ea93acbcfef511fce85. 
May 13 23:43:04.769164 containerd[1940]: time="2025-05-13T23:43:04.769112331Z" level=info msg="CreateContainer within sandbox \"708a7d7070379cd0eab1326421406f5dd6819c383d24d46df1f5664bbd503fc2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89\"" May 13 23:43:04.770976 containerd[1940]: time="2025-05-13T23:43:04.770913351Z" level=info msg="StartContainer for \"5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89\"" May 13 23:43:04.775640 containerd[1940]: time="2025-05-13T23:43:04.775545111Z" level=info msg="connecting to shim 5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89" address="unix:///run/containerd/s/af3a7e94d2b6137f615dd0b935c850450efd2e23126250e27e14f5416657f05b" protocol=ttrpc version=3 May 13 23:43:04.790325 containerd[1940]: time="2025-05-13T23:43:04.790258143Z" level=info msg="Container 5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:04.826782 containerd[1940]: time="2025-05-13T23:43:04.825846207Z" level=info msg="CreateContainer within sandbox \"c995bbec20f3b7f3aa171704449013b05541d914cdfbe638d4499ac17580e9ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1\"" May 13 23:43:04.829843 containerd[1940]: time="2025-05-13T23:43:04.829050939Z" level=info msg="StartContainer for \"5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1\"" May 13 23:43:04.837656 containerd[1940]: time="2025-05-13T23:43:04.836095779Z" level=info msg="connecting to shim 5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1" address="unix:///run/containerd/s/24a408e62561b42b1a26a8e1df5323f8c673fe1e31eb193f55600c74852cb531" protocol=ttrpc version=3 May 13 23:43:04.861923 systemd[1]: Started 
cri-containerd-5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89.scope - libcontainer container 5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89. May 13 23:43:04.894722 systemd[1]: Started cri-containerd-5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1.scope - libcontainer container 5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1. May 13 23:43:04.945035 containerd[1940]: time="2025-05-13T23:43:04.944588452Z" level=info msg="StartContainer for \"cf582d6cd526902dcaaa98ae27f8c468395fbdc45e931ea93acbcfef511fce85\" returns successfully" May 13 23:43:04.972568 kubelet[2849]: W0513 23:43:04.972469 2849 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.246:6443: connect: connection refused May 13 23:43:04.973086 kubelet[2849]: E0513 23:43:04.972575 2849 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.246:6443: connect: connection refused" logger="UnhandledError" May 13 23:43:05.037921 containerd[1940]: time="2025-05-13T23:43:05.037858884Z" level=info msg="StartContainer for \"5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89\" returns successfully" May 13 23:43:05.070670 containerd[1940]: time="2025-05-13T23:43:05.070552020Z" level=info msg="StartContainer for \"5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1\" returns successfully" May 13 23:43:05.504692 kubelet[2849]: I0513 23:43:05.504647 2849 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-246" May 13 23:43:05.685874 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
May 13 23:43:08.236419 kubelet[2849]: E0513 23:43:08.236355 2849 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-246\" not found" node="ip-172-31-17-246" May 13 23:43:08.420686 kubelet[2849]: E0513 23:43:08.420483 2849 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-17-246.183f3ac1a71891a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-246,UID:ip-172-31-17-246,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-246,},FirstTimestamp:2025-05-13 23:43:03.849546146 +0000 UTC m=+1.707900825,LastTimestamp:2025-05-13 23:43:03.849546146 +0000 UTC m=+1.707900825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-246,}" May 13 23:43:08.452760 kubelet[2849]: I0513 23:43:08.452700 2849 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-246" May 13 23:43:08.854824 kubelet[2849]: I0513 23:43:08.854772 2849 apiserver.go:52] "Watching apiserver" May 13 23:43:08.870107 kubelet[2849]: I0513 23:43:08.870046 2849 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:43:08.986732 kubelet[2849]: E0513 23:43:08.986671 2849 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-246\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-246" May 13 23:43:10.623163 systemd[1]: Reload requested from client PID 3124 ('systemctl') (unit session-7.scope)... May 13 23:43:10.623641 systemd[1]: Reloading... May 13 23:43:10.827285 zram_generator::config[3181]: No configuration found. 
May 13 23:43:11.090475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:43:11.364397 systemd[1]: Reloading finished in 740 ms. May 13 23:43:11.420773 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:43:11.434296 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:43:11.436317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:11.436706 systemd[1]: kubelet.service: Consumed 2.409s CPU time, 117.5M memory peak. May 13 23:43:11.442028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:43:11.801396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:43:11.820881 (kubelet)[3229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:43:11.908439 kubelet[3229]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:43:11.908439 kubelet[3229]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:43:11.908439 kubelet[3229]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 23:43:11.908957 kubelet[3229]: I0513 23:43:11.908574 3229 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:43:11.919450 kubelet[3229]: I0513 23:43:11.919390 3229 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:43:11.919450 kubelet[3229]: I0513 23:43:11.919438 3229 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:43:11.920267 kubelet[3229]: I0513 23:43:11.919890 3229 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:43:11.924965 kubelet[3229]: I0513 23:43:11.924670 3229 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:43:11.929407 kubelet[3229]: I0513 23:43:11.929362 3229 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:43:11.948771 kubelet[3229]: I0513 23:43:11.948702 3229 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:43:11.954085 kubelet[3229]: I0513 23:43:11.954026 3229 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:43:11.954974 kubelet[3229]: I0513 23:43:11.954251 3229 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:43:11.954974 kubelet[3229]: I0513 23:43:11.954502 3229 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:43:11.954974 kubelet[3229]: I0513 23:43:11.954537 3229 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-246","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManager
PolicyOptions":null,"CgroupVersion":2} May 13 23:43:11.954974 kubelet[3229]: I0513 23:43:11.954828 3229 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:43:11.956927 kubelet[3229]: I0513 23:43:11.954847 3229 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:43:11.956927 kubelet[3229]: I0513 23:43:11.954899 3229 state_mem.go:36] "Initialized new in-memory state store" May 13 23:43:11.956927 kubelet[3229]: I0513 23:43:11.955090 3229 kubelet.go:408] "Attempting to sync node with API server" May 13 23:43:11.956927 kubelet[3229]: I0513 23:43:11.955113 3229 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:43:11.956927 kubelet[3229]: I0513 23:43:11.955162 3229 kubelet.go:314] "Adding apiserver pod source" May 13 23:43:11.956927 kubelet[3229]: I0513 23:43:11.955190 3229 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:43:11.961282 kubelet[3229]: I0513 23:43:11.960212 3229 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:43:11.961282 kubelet[3229]: I0513 23:43:11.961029 3229 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:43:11.962360 kubelet[3229]: I0513 23:43:11.962312 3229 server.go:1269] "Started kubelet" May 13 23:43:11.966141 kubelet[3229]: I0513 23:43:11.966054 3229 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:43:11.968256 kubelet[3229]: I0513 23:43:11.968095 3229 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:43:11.968639 kubelet[3229]: I0513 23:43:11.968602 3229 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:43:11.969318 kubelet[3229]: I0513 23:43:11.969292 3229 server.go:460] "Adding debug handlers to kubelet server" May 13 
23:43:11.973528 kubelet[3229]: I0513 23:43:11.973496 3229 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:43:11.975843 kubelet[3229]: I0513 23:43:11.975783 3229 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:43:11.983820 kubelet[3229]: I0513 23:43:11.983784 3229 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:43:11.984506 kubelet[3229]: E0513 23:43:11.984472 3229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-246\" not found" May 13 23:43:11.987755 kubelet[3229]: I0513 23:43:11.987718 3229 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:43:11.988177 kubelet[3229]: I0513 23:43:11.988156 3229 reconciler.go:26] "Reconciler: start to sync state" May 13 23:43:11.991685 kubelet[3229]: I0513 23:43:11.991637 3229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:43:12.008341 kubelet[3229]: I0513 23:43:12.008301 3229 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:43:12.008998 kubelet[3229]: I0513 23:43:12.008483 3229 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:43:12.008998 kubelet[3229]: I0513 23:43:12.008524 3229 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:43:12.008998 kubelet[3229]: E0513 23:43:12.008627 3229 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:43:12.062363 kubelet[3229]: I0513 23:43:12.061101 3229 factory.go:221] Registration of the systemd container factory successfully May 13 23:43:12.062363 kubelet[3229]: I0513 23:43:12.061772 3229 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:43:12.074744 kubelet[3229]: E0513 23:43:12.074685 3229 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:43:12.075810 kubelet[3229]: I0513 23:43:12.075152 3229 factory.go:221] Registration of the containerd container factory successfully May 13 23:43:12.113511 kubelet[3229]: E0513 23:43:12.113392 3229 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:43:12.197772 kubelet[3229]: I0513 23:43:12.197716 3229 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:43:12.197772 kubelet[3229]: I0513 23:43:12.197753 3229 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:43:12.198362 kubelet[3229]: I0513 23:43:12.197791 3229 state_mem.go:36] "Initialized new in-memory state store" May 13 23:43:12.198362 kubelet[3229]: I0513 23:43:12.198041 3229 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:43:12.198362 kubelet[3229]: I0513 23:43:12.198066 3229 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:43:12.198362 kubelet[3229]: I0513 23:43:12.198100 3229 policy_none.go:49] "None policy: Start" May 13 23:43:12.200772 kubelet[3229]: I0513 23:43:12.200337 3229 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:43:12.200772 kubelet[3229]: I0513 23:43:12.200405 3229 state_mem.go:35] "Initializing new in-memory state store" May 13 23:43:12.200772 kubelet[3229]: I0513 23:43:12.200766 3229 state_mem.go:75] "Updated machine memory state" May 13 23:43:12.212158 kubelet[3229]: I0513 23:43:12.211823 3229 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:43:12.212158 kubelet[3229]: I0513 23:43:12.212133 3229 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:43:12.214201 kubelet[3229]: I0513 23:43:12.212378 3229 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 
23:43:12.214201 kubelet[3229]: I0513 23:43:12.212725 3229 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:43:12.329200 kubelet[3229]: E0513 23:43:12.328919 3229 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-17-246\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-246" May 13 23:43:12.332277 kubelet[3229]: I0513 23:43:12.332206 3229 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-246" May 13 23:43:12.349971 kubelet[3229]: I0513 23:43:12.348214 3229 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-17-246" May 13 23:43:12.349971 kubelet[3229]: I0513 23:43:12.348357 3229 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-246" May 13 23:43:12.391336 kubelet[3229]: I0513 23:43:12.391293 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e9d1fc053e3fb6d3f002bf4718aa3d4-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-246\" (UID: \"0e9d1fc053e3fb6d3f002bf4718aa3d4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-246" May 13 23:43:12.391894 kubelet[3229]: I0513 23:43:12.391425 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e9d1fc053e3fb6d3f002bf4718aa3d4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-246\" (UID: \"0e9d1fc053e3fb6d3f002bf4718aa3d4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-246" May 13 23:43:12.391894 kubelet[3229]: I0513 23:43:12.391467 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba21f3e1fac6027b58b3375205dc1e8e-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-246\" (UID: \"ba21f3e1fac6027b58b3375205dc1e8e\") " 
pod="kube-system/kube-scheduler-ip-172-31-17-246" May 13 23:43:12.391894 kubelet[3229]: I0513 23:43:12.391503 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57f971a93808911126aedd99ae4859fa-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-246\" (UID: \"57f971a93808911126aedd99ae4859fa\") " pod="kube-system/kube-apiserver-ip-172-31-17-246" May 13 23:43:12.391894 kubelet[3229]: I0513 23:43:12.391542 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0e9d1fc053e3fb6d3f002bf4718aa3d4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-246\" (UID: \"0e9d1fc053e3fb6d3f002bf4718aa3d4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-246" May 13 23:43:12.391894 kubelet[3229]: I0513 23:43:12.391584 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e9d1fc053e3fb6d3f002bf4718aa3d4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-246\" (UID: \"0e9d1fc053e3fb6d3f002bf4718aa3d4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-246" May 13 23:43:12.392357 kubelet[3229]: I0513 23:43:12.391623 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e9d1fc053e3fb6d3f002bf4718aa3d4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-246\" (UID: \"0e9d1fc053e3fb6d3f002bf4718aa3d4\") " pod="kube-system/kube-controller-manager-ip-172-31-17-246" May 13 23:43:12.392357 kubelet[3229]: I0513 23:43:12.391660 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57f971a93808911126aedd99ae4859fa-ca-certs\") pod 
\"kube-apiserver-ip-172-31-17-246\" (UID: \"57f971a93808911126aedd99ae4859fa\") " pod="kube-system/kube-apiserver-ip-172-31-17-246" May 13 23:43:12.392357 kubelet[3229]: I0513 23:43:12.391695 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57f971a93808911126aedd99ae4859fa-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-246\" (UID: \"57f971a93808911126aedd99ae4859fa\") " pod="kube-system/kube-apiserver-ip-172-31-17-246" May 13 23:43:12.955771 kubelet[3229]: I0513 23:43:12.955703 3229 apiserver.go:52] "Watching apiserver" May 13 23:43:12.988359 kubelet[3229]: I0513 23:43:12.988274 3229 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:43:13.168880 kubelet[3229]: E0513 23:43:13.168064 3229 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-17-246\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-246" May 13 23:43:13.168880 kubelet[3229]: E0513 23:43:13.168438 3229 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-17-246\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-246" May 13 23:43:13.168880 kubelet[3229]: E0513 23:43:13.168672 3229 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-246\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-246" May 13 23:43:13.312750 kubelet[3229]: I0513 23:43:13.312611 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-246" podStartSLOduration=3.312588501 podStartE2EDuration="3.312588501s" podCreationTimestamp="2025-05-13 23:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:13.257143917 +0000 
UTC m=+1.425746840" watchObservedRunningTime="2025-05-13 23:43:13.312588501 +0000 UTC m=+1.481191412" May 13 23:43:13.362123 kubelet[3229]: I0513 23:43:13.361887 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-246" podStartSLOduration=1.361865074 podStartE2EDuration="1.361865074s" podCreationTimestamp="2025-05-13 23:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:13.316765641 +0000 UTC m=+1.485368660" watchObservedRunningTime="2025-05-13 23:43:13.361865074 +0000 UTC m=+1.530467985" May 13 23:43:15.766074 kubelet[3229]: I0513 23:43:15.765858 3229 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:43:15.768001 containerd[1940]: time="2025-05-13T23:43:15.767095034Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 23:43:15.768965 kubelet[3229]: I0513 23:43:15.767420 3229 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:43:15.826355 kubelet[3229]: I0513 23:43:15.825033 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-246" podStartSLOduration=3.825013898 podStartE2EDuration="3.825013898s" podCreationTimestamp="2025-05-13 23:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:13.366607606 +0000 UTC m=+1.535210553" watchObservedRunningTime="2025-05-13 23:43:15.825013898 +0000 UTC m=+3.993616809" May 13 23:43:16.487205 kubelet[3229]: W0513 23:43:16.487146 3229 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-17-246" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-246' and this object May 13 23:43:16.487387 kubelet[3229]: E0513 23:43:16.487214 3229 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-17-246\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-246' and this object" logger="UnhandledError" May 13 23:43:16.487387 kubelet[3229]: W0513 23:43:16.487318 3229 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-246" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-246' and this object May 13 23:43:16.487387 kubelet[3229]: E0513 
23:43:16.487346 3229 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-17-246\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-246' and this object" logger="UnhandledError" May 13 23:43:16.505856 systemd[1]: Created slice kubepods-besteffort-pod8ec9a818_24f0_4d72_a4db_2a23d423228c.slice - libcontainer container kubepods-besteffort-pod8ec9a818_24f0_4d72_a4db_2a23d423228c.slice. May 13 23:43:16.523065 kubelet[3229]: I0513 23:43:16.522819 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ec9a818-24f0-4d72-a4db-2a23d423228c-lib-modules\") pod \"kube-proxy-rv4fg\" (UID: \"8ec9a818-24f0-4d72-a4db-2a23d423228c\") " pod="kube-system/kube-proxy-rv4fg" May 13 23:43:16.523065 kubelet[3229]: I0513 23:43:16.522884 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ec9a818-24f0-4d72-a4db-2a23d423228c-kube-proxy\") pod \"kube-proxy-rv4fg\" (UID: \"8ec9a818-24f0-4d72-a4db-2a23d423228c\") " pod="kube-system/kube-proxy-rv4fg" May 13 23:43:16.523065 kubelet[3229]: I0513 23:43:16.522921 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ec9a818-24f0-4d72-a4db-2a23d423228c-xtables-lock\") pod \"kube-proxy-rv4fg\" (UID: \"8ec9a818-24f0-4d72-a4db-2a23d423228c\") " pod="kube-system/kube-proxy-rv4fg" May 13 23:43:16.523065 kubelet[3229]: I0513 23:43:16.522956 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22p45\" (UniqueName: 
\"kubernetes.io/projected/8ec9a818-24f0-4d72-a4db-2a23d423228c-kube-api-access-22p45\") pod \"kube-proxy-rv4fg\" (UID: \"8ec9a818-24f0-4d72-a4db-2a23d423228c\") " pod="kube-system/kube-proxy-rv4fg" May 13 23:43:16.646660 systemd[1]: Created slice kubepods-besteffort-pod9f72a57e_854d_48c5_a778_d66e3cc9637e.slice - libcontainer container kubepods-besteffort-pod9f72a57e_854d_48c5_a778_d66e3cc9637e.slice. May 13 23:43:16.661275 kubelet[3229]: W0513 23:43:16.661062 3229 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-246" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-17-246' and this object May 13 23:43:16.661275 kubelet[3229]: E0513 23:43:16.661141 3229 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-17-246\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-17-246' and this object" logger="UnhandledError" May 13 23:43:16.661275 kubelet[3229]: W0513 23:43:16.661062 3229 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-17-246" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-17-246' and this object May 13 23:43:16.661275 kubelet[3229]: E0513 23:43:16.661210 3229 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User 
\"system:node:ip-172-31-17-246\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-17-246' and this object" logger="UnhandledError" May 13 23:43:16.723910 kubelet[3229]: I0513 23:43:16.723837 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q829\" (UniqueName: \"kubernetes.io/projected/9f72a57e-854d-48c5-a778-d66e3cc9637e-kube-api-access-9q829\") pod \"tigera-operator-6f6897fdc5-l5q25\" (UID: \"9f72a57e-854d-48c5-a778-d66e3cc9637e\") " pod="tigera-operator/tigera-operator-6f6897fdc5-l5q25" May 13 23:43:16.723910 kubelet[3229]: I0513 23:43:16.723914 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f72a57e-854d-48c5-a778-d66e3cc9637e-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-l5q25\" (UID: \"9f72a57e-854d-48c5-a778-d66e3cc9637e\") " pod="tigera-operator/tigera-operator-6f6897fdc5-l5q25" May 13 23:43:17.555562 containerd[1940]: time="2025-05-13T23:43:17.555476078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-l5q25,Uid:9f72a57e-854d-48c5-a778-d66e3cc9637e,Namespace:tigera-operator,Attempt:0,}" May 13 23:43:17.601733 containerd[1940]: time="2025-05-13T23:43:17.601669947Z" level=info msg="connecting to shim f26e0a8667c265e63d6e1af373330cf16b0e164875945d958127af3fe4e5a45d" address="unix:///run/containerd/s/11e8d71fcfcf06fb4a1aaa1804dd6c782cac4391e4d1eb90ed204b1b58c6002b" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:17.645555 systemd[1]: Started cri-containerd-f26e0a8667c265e63d6e1af373330cf16b0e164875945d958127af3fe4e5a45d.scope - libcontainer container f26e0a8667c265e63d6e1af373330cf16b0e164875945d958127af3fe4e5a45d. 
May 13 23:43:17.715280 containerd[1940]: time="2025-05-13T23:43:17.715145355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-l5q25,Uid:9f72a57e-854d-48c5-a778-d66e3cc9637e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f26e0a8667c265e63d6e1af373330cf16b0e164875945d958127af3fe4e5a45d\"" May 13 23:43:17.719256 containerd[1940]: time="2025-05-13T23:43:17.718834107Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 23:43:17.733291 kubelet[3229]: E0513 23:43:17.732833 3229 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 13 23:43:17.733291 kubelet[3229]: E0513 23:43:17.732886 3229 projected.go:194] Error preparing data for projected volume kube-api-access-22p45 for pod kube-system/kube-proxy-rv4fg: failed to sync configmap cache: timed out waiting for the condition May 13 23:43:17.733291 kubelet[3229]: E0513 23:43:17.732972 3229 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ec9a818-24f0-4d72-a4db-2a23d423228c-kube-api-access-22p45 podName:8ec9a818-24f0-4d72-a4db-2a23d423228c nodeName:}" failed. No retries permitted until 2025-05-13 23:43:18.232941607 +0000 UTC m=+6.401544506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-22p45" (UniqueName: "kubernetes.io/projected/8ec9a818-24f0-4d72-a4db-2a23d423228c-kube-api-access-22p45") pod "kube-proxy-rv4fg" (UID: "8ec9a818-24f0-4d72-a4db-2a23d423228c") : failed to sync configmap cache: timed out waiting for the condition May 13 23:43:18.325342 containerd[1940]: time="2025-05-13T23:43:18.325270478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rv4fg,Uid:8ec9a818-24f0-4d72-a4db-2a23d423228c,Namespace:kube-system,Attempt:0,}" May 13 23:43:18.376817 containerd[1940]: time="2025-05-13T23:43:18.376697198Z" level=info msg="connecting to shim d1e56a14461c7454345f1060313797f1b25a63b40b30643e104c37937099905c" address="unix:///run/containerd/s/83c58bd7dd1f02daac492f1518501e9e001e30e8186346e457fd72bef40a6fc5" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:18.423584 systemd[1]: Started cri-containerd-d1e56a14461c7454345f1060313797f1b25a63b40b30643e104c37937099905c.scope - libcontainer container d1e56a14461c7454345f1060313797f1b25a63b40b30643e104c37937099905c. 
May 13 23:43:18.480114 containerd[1940]: time="2025-05-13T23:43:18.480037671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rv4fg,Uid:8ec9a818-24f0-4d72-a4db-2a23d423228c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1e56a14461c7454345f1060313797f1b25a63b40b30643e104c37937099905c\"" May 13 23:43:18.484292 containerd[1940]: time="2025-05-13T23:43:18.484189011Z" level=info msg="CreateContainer within sandbox \"d1e56a14461c7454345f1060313797f1b25a63b40b30643e104c37937099905c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:43:18.504251 containerd[1940]: time="2025-05-13T23:43:18.503633967Z" level=info msg="Container 589e38d19e2577cc1a1a099d367a593ad2b017097dfb097787f37ed957ec0ff0: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:18.528843 containerd[1940]: time="2025-05-13T23:43:18.528769227Z" level=info msg="CreateContainer within sandbox \"d1e56a14461c7454345f1060313797f1b25a63b40b30643e104c37937099905c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"589e38d19e2577cc1a1a099d367a593ad2b017097dfb097787f37ed957ec0ff0\"" May 13 23:43:18.532337 containerd[1940]: time="2025-05-13T23:43:18.532261515Z" level=info msg="StartContainer for \"589e38d19e2577cc1a1a099d367a593ad2b017097dfb097787f37ed957ec0ff0\"" May 13 23:43:18.540593 containerd[1940]: time="2025-05-13T23:43:18.540516339Z" level=info msg="connecting to shim 589e38d19e2577cc1a1a099d367a593ad2b017097dfb097787f37ed957ec0ff0" address="unix:///run/containerd/s/83c58bd7dd1f02daac492f1518501e9e001e30e8186346e457fd72bef40a6fc5" protocol=ttrpc version=3 May 13 23:43:18.582682 systemd[1]: Started cri-containerd-589e38d19e2577cc1a1a099d367a593ad2b017097dfb097787f37ed957ec0ff0.scope - libcontainer container 589e38d19e2577cc1a1a099d367a593ad2b017097dfb097787f37ed957ec0ff0. 
May 13 23:43:18.683413 containerd[1940]: time="2025-05-13T23:43:18.683353972Z" level=info msg="StartContainer for \"589e38d19e2577cc1a1a099d367a593ad2b017097dfb097787f37ed957ec0ff0\" returns successfully" May 13 23:43:19.251721 sudo[2289]: pam_unix(sudo:session): session closed for user root May 13 23:43:19.279834 sshd[2288]: Connection closed by 139.178.89.65 port 39502 May 13 23:43:19.281694 sshd-session[2286]: pam_unix(sshd:session): session closed for user core May 13 23:43:19.295772 systemd[1]: sshd@6-172.31.17.246:22-139.178.89.65:39502.service: Deactivated successfully. May 13 23:43:19.301069 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:43:19.301525 systemd[1]: session-7.scope: Consumed 10.078s CPU time, 227.7M memory peak. May 13 23:43:19.308580 systemd-logind[1930]: Session 7 logged out. Waiting for processes to exit. May 13 23:43:19.311427 systemd-logind[1930]: Removed session 7. May 13 23:43:19.324588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516511224.mount: Deactivated successfully. 
May 13 23:43:20.001917 containerd[1940]: time="2025-05-13T23:43:20.001865655Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:20.003859 containerd[1940]: time="2025-05-13T23:43:20.003767079Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 13 23:43:20.006367 containerd[1940]: time="2025-05-13T23:43:20.006320523Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:20.013073 containerd[1940]: time="2025-05-13T23:43:20.011438919Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:20.013073 containerd[1940]: time="2025-05-13T23:43:20.012830139Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.293930824s" May 13 23:43:20.013073 containerd[1940]: time="2025-05-13T23:43:20.012872355Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 13 23:43:20.020389 containerd[1940]: time="2025-05-13T23:43:20.020191227Z" level=info msg="CreateContainer within sandbox \"f26e0a8667c265e63d6e1af373330cf16b0e164875945d958127af3fe4e5a45d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 23:43:20.037190 containerd[1940]: time="2025-05-13T23:43:20.037123851Z" level=info msg="Container 
f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:20.046541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2049775137.mount: Deactivated successfully. May 13 23:43:20.053492 containerd[1940]: time="2025-05-13T23:43:20.053414811Z" level=info msg="CreateContainer within sandbox \"f26e0a8667c265e63d6e1af373330cf16b0e164875945d958127af3fe4e5a45d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d\"" May 13 23:43:20.056193 containerd[1940]: time="2025-05-13T23:43:20.054392439Z" level=info msg="StartContainer for \"f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d\"" May 13 23:43:20.056193 containerd[1940]: time="2025-05-13T23:43:20.055984083Z" level=info msg="connecting to shim f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d" address="unix:///run/containerd/s/11e8d71fcfcf06fb4a1aaa1804dd6c782cac4391e4d1eb90ed204b1b58c6002b" protocol=ttrpc version=3 May 13 23:43:20.095530 systemd[1]: Started cri-containerd-f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d.scope - libcontainer container f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d. May 13 23:43:20.153481 containerd[1940]: time="2025-05-13T23:43:20.153080619Z" level=info msg="StartContainer for \"f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d\" returns successfully" May 13 23:43:20.165380 update_engine[1931]: I20250513 23:43:20.165271 1931 update_attempter.cc:509] Updating boot flags... 
May 13 23:43:20.258268 kubelet[3229]: I0513 23:43:20.255990 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rv4fg" podStartSLOduration=4.255965668 podStartE2EDuration="4.255965668s" podCreationTimestamp="2025-05-13 23:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:19.248929335 +0000 UTC m=+7.417532258" watchObservedRunningTime="2025-05-13 23:43:20.255965668 +0000 UTC m=+8.424568579" May 13 23:43:20.258268 kubelet[3229]: I0513 23:43:20.256194 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-l5q25" podStartSLOduration=1.9579036159999998 podStartE2EDuration="4.25618264s" podCreationTimestamp="2025-05-13 23:43:16 +0000 UTC" firstStartedPulling="2025-05-13 23:43:17.718091799 +0000 UTC m=+5.886694710" lastFinishedPulling="2025-05-13 23:43:20.016370823 +0000 UTC m=+8.184973734" observedRunningTime="2025-05-13 23:43:20.254472976 +0000 UTC m=+8.423075911" watchObservedRunningTime="2025-05-13 23:43:20.25618264 +0000 UTC m=+8.424785575" May 13 23:43:20.294148 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3612) May 13 23:43:26.333104 systemd[1]: Created slice kubepods-besteffort-pod173cc3de_f812_43ca_a051_9eea91bb3120.slice - libcontainer container kubepods-besteffort-pod173cc3de_f812_43ca_a051_9eea91bb3120.slice. 
May 13 23:43:26.490628 kubelet[3229]: I0513 23:43:26.490303 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/173cc3de-f812-43ca-a051-9eea91bb3120-tigera-ca-bundle\") pod \"calico-typha-79bccb9478-gn7zt\" (UID: \"173cc3de-f812-43ca-a051-9eea91bb3120\") " pod="calico-system/calico-typha-79bccb9478-gn7zt" May 13 23:43:26.490628 kubelet[3229]: I0513 23:43:26.490372 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/173cc3de-f812-43ca-a051-9eea91bb3120-typha-certs\") pod \"calico-typha-79bccb9478-gn7zt\" (UID: \"173cc3de-f812-43ca-a051-9eea91bb3120\") " pod="calico-system/calico-typha-79bccb9478-gn7zt" May 13 23:43:26.490628 kubelet[3229]: I0513 23:43:26.490418 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88tmx\" (UniqueName: \"kubernetes.io/projected/173cc3de-f812-43ca-a051-9eea91bb3120-kube-api-access-88tmx\") pod \"calico-typha-79bccb9478-gn7zt\" (UID: \"173cc3de-f812-43ca-a051-9eea91bb3120\") " pod="calico-system/calico-typha-79bccb9478-gn7zt" May 13 23:43:26.533335 systemd[1]: Created slice kubepods-besteffort-pod5a109143_6ed4_4877_9c34_a2c980484462.slice - libcontainer container kubepods-besteffort-pod5a109143_6ed4_4877_9c34_a2c980484462.slice. 
May 13 23:43:26.591507 kubelet[3229]: I0513 23:43:26.591371 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5a109143-6ed4-4877-9c34-a2c980484462-cni-net-dir\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.591507 kubelet[3229]: I0513 23:43:26.591437 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb86b\" (UniqueName: \"kubernetes.io/projected/5a109143-6ed4-4877-9c34-a2c980484462-kube-api-access-lb86b\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.591867 kubelet[3229]: I0513 23:43:26.591479 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5a109143-6ed4-4877-9c34-a2c980484462-node-certs\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.591967 kubelet[3229]: I0513 23:43:26.591939 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a109143-6ed4-4877-9c34-a2c980484462-tigera-ca-bundle\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.592028 kubelet[3229]: I0513 23:43:26.591979 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5a109143-6ed4-4877-9c34-a2c980484462-cni-bin-dir\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.593117 kubelet[3229]: I0513 23:43:26.592107 3229 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a109143-6ed4-4877-9c34-a2c980484462-xtables-lock\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.593117 kubelet[3229]: I0513 23:43:26.592156 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5a109143-6ed4-4877-9c34-a2c980484462-var-run-calico\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.593117 kubelet[3229]: I0513 23:43:26.592217 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a109143-6ed4-4877-9c34-a2c980484462-var-lib-calico\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.593117 kubelet[3229]: I0513 23:43:26.592301 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5a109143-6ed4-4877-9c34-a2c980484462-policysync\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.593117 kubelet[3229]: I0513 23:43:26.592399 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a109143-6ed4-4877-9c34-a2c980484462-lib-modules\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.593489 kubelet[3229]: I0513 23:43:26.592465 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5a109143-6ed4-4877-9c34-a2c980484462-cni-log-dir\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.593489 kubelet[3229]: I0513 23:43:26.592508 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5a109143-6ed4-4877-9c34-a2c980484462-flexvol-driver-host\") pod \"calico-node-g4hkn\" (UID: \"5a109143-6ed4-4877-9c34-a2c980484462\") " pod="calico-system/calico-node-g4hkn" May 13 23:43:26.647404 containerd[1940]: time="2025-05-13T23:43:26.646833324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79bccb9478-gn7zt,Uid:173cc3de-f812-43ca-a051-9eea91bb3120,Namespace:calico-system,Attempt:0,}" May 13 23:43:26.710952 kubelet[3229]: E0513 23:43:26.710818 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.713251 kubelet[3229]: W0513 23:43:26.710862 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.720818 kubelet[3229]: E0513 23:43:26.713941 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:26.720818 kubelet[3229]: E0513 23:43:26.720504 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.720818 kubelet[3229]: W0513 23:43:26.720540 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.720818 kubelet[3229]: E0513 23:43:26.720572 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:26.726716 kubelet[3229]: E0513 23:43:26.726662 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.726716 kubelet[3229]: W0513 23:43:26.726701 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.726930 kubelet[3229]: E0513 23:43:26.726746 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:26.728111 kubelet[3229]: E0513 23:43:26.728065 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.728111 kubelet[3229]: W0513 23:43:26.728105 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.729214 kubelet[3229]: E0513 23:43:26.729141 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:26.729758 kubelet[3229]: E0513 23:43:26.729708 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.729758 kubelet[3229]: W0513 23:43:26.729743 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.730484 kubelet[3229]: E0513 23:43:26.730420 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:26.730573 containerd[1940]: time="2025-05-13T23:43:26.729972456Z" level=info msg="connecting to shim 64474e1ef9be4c2311f627297759ca0e37e0d451144d03a6f02273605355d6e6" address="unix:///run/containerd/s/67ed080d3dbed6dfc64e1aa00a953567ecc2d3662e52330dd350805aecab98d7" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:26.732043 kubelet[3229]: E0513 23:43:26.731978 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.732043 kubelet[3229]: W0513 23:43:26.732017 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.732320 kubelet[3229]: E0513 23:43:26.732077 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:26.734309 kubelet[3229]: E0513 23:43:26.733189 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.734309 kubelet[3229]: W0513 23:43:26.733218 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.734309 kubelet[3229]: E0513 23:43:26.733283 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:26.735032 kubelet[3229]: E0513 23:43:26.734960 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.735032 kubelet[3229]: W0513 23:43:26.734997 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.735032 kubelet[3229]: E0513 23:43:26.735030 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:26.736704 kubelet[3229]: E0513 23:43:26.736641 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.736704 kubelet[3229]: W0513 23:43:26.736681 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.736915 kubelet[3229]: E0513 23:43:26.736715 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:26.794255 kubelet[3229]: E0513 23:43:26.793955 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m6fkp" podUID="d371daf2-08ec-44bd-92c2-cd5610ab090d" May 13 23:43:26.796210 kubelet[3229]: E0513 23:43:26.796156 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.796210 kubelet[3229]: W0513 23:43:26.796199 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.796210 kubelet[3229]: E0513 23:43:26.796249 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:26.801000 kubelet[3229]: E0513 23:43:26.799112 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.801000 kubelet[3229]: W0513 23:43:26.799316 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.801000 kubelet[3229]: E0513 23:43:26.799352 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:26.802980 kubelet[3229]: E0513 23:43:26.802518 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.802980 kubelet[3229]: W0513 23:43:26.802574 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.802980 kubelet[3229]: E0513 23:43:26.802606 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:26.804277 kubelet[3229]: E0513 23:43:26.803726 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:26.804277 kubelet[3229]: W0513 23:43:26.803759 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:26.804277 kubelet[3229]: E0513 23:43:26.804124 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 23:43:26.807534 kubelet[3229]: E0513 23:43:26.806105 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:43:26.807534 kubelet[3229]: W0513 23:43:26.806139 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:43:26.807534 kubelet[3229]: E0513 23:43:26.806169 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:43:26.835285 systemd[1]: Started cri-containerd-64474e1ef9be4c2311f627297759ca0e37e0d451144d03a6f02273605355d6e6.scope - libcontainer container 64474e1ef9be4c2311f627297759ca0e37e0d451144d03a6f02273605355d6e6.
May 13 23:43:26.850483 containerd[1940]: time="2025-05-13T23:43:26.850417117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g4hkn,Uid:5a109143-6ed4-4877-9c34-a2c980484462,Namespace:calico-system,Attempt:0,}"
May 13 23:43:26.899542 kubelet[3229]: I0513 23:43:26.899499 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krfmg\" (UniqueName: \"kubernetes.io/projected/d371daf2-08ec-44bd-92c2-cd5610ab090d-kube-api-access-krfmg\") pod \"csi-node-driver-m6fkp\" (UID: \"d371daf2-08ec-44bd-92c2-cd5610ab090d\") " pod="calico-system/csi-node-driver-m6fkp"
May 13 23:43:26.903246 kubelet[3229]: I0513 23:43:26.902352 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d371daf2-08ec-44bd-92c2-cd5610ab090d-varrun\") pod \"csi-node-driver-m6fkp\" (UID: \"d371daf2-08ec-44bd-92c2-cd5610ab090d\") " pod="calico-system/csi-node-driver-m6fkp"
May 13 23:43:26.909334 containerd[1940]: time="2025-05-13T23:43:26.908579125Z" level=info msg="connecting to shim b0718cdb4426a2661edee8ea588e3af6e9490a0bbf63f2437af4d9ea9dd49566" address="unix:///run/containerd/s/119206a1aaaca3df7d5a309b3b97a4364e13cd53d8a9b01caf2f281012eaa32e" namespace=k8s.io protocol=ttrpc version=3
May 13 23:43:26.910343 kubelet[3229]: I0513 23:43:26.910255 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d371daf2-08ec-44bd-92c2-cd5610ab090d-kubelet-dir\") pod \"csi-node-driver-m6fkp\" (UID: \"d371daf2-08ec-44bd-92c2-cd5610ab090d\") " pod="calico-system/csi-node-driver-m6fkp"
May 13 23:43:26.916690 kubelet[3229]: I0513 23:43:26.916440 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d371daf2-08ec-44bd-92c2-cd5610ab090d-registration-dir\") pod \"csi-node-driver-m6fkp\" (UID: \"d371daf2-08ec-44bd-92c2-cd5610ab090d\") " pod="calico-system/csi-node-driver-m6fkp"
May 13 23:43:26.920669 kubelet[3229]: I0513 23:43:26.919511 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d371daf2-08ec-44bd-92c2-cd5610ab090d-socket-dir\") pod \"csi-node-driver-m6fkp\" (UID: \"d371daf2-08ec-44bd-92c2-cd5610ab090d\") " pod="calico-system/csi-node-driver-m6fkp"
May 13 23:43:26.982572 systemd[1]: Started cri-containerd-b0718cdb4426a2661edee8ea588e3af6e9490a0bbf63f2437af4d9ea9dd49566.scope - libcontainer container b0718cdb4426a2661edee8ea588e3af6e9490a0bbf63f2437af4d9ea9dd49566.
May 13 23:43:27.046722 kubelet[3229]: E0513 23:43:27.046685 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:27.047347 kubelet[3229]: E0513 23:43:27.047289 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:27.047347 kubelet[3229]: W0513 23:43:27.047326 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:27.048414 kubelet[3229]: E0513 23:43:27.047908 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:27.049496 kubelet[3229]: E0513 23:43:27.049441 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:27.049496 kubelet[3229]: W0513 23:43:27.049481 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:27.049943 kubelet[3229]: E0513 23:43:27.049712 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:27.050178 kubelet[3229]: E0513 23:43:27.050143 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:27.052448 kubelet[3229]: W0513 23:43:27.052288 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:27.052808 kubelet[3229]: E0513 23:43:27.052550 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:27.053282 kubelet[3229]: E0513 23:43:27.053213 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:27.053282 kubelet[3229]: W0513 23:43:27.053271 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:27.053654 kubelet[3229]: E0513 23:43:27.053448 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:27.054075 kubelet[3229]: E0513 23:43:27.053987 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:27.054075 kubelet[3229]: W0513 23:43:27.054020 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:27.054075 kubelet[3229]: E0513 23:43:27.054061 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:27.054836 kubelet[3229]: E0513 23:43:27.054799 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:27.054836 kubelet[3229]: W0513 23:43:27.054830 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:27.055068 kubelet[3229]: E0513 23:43:27.054859 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:43:27.075866 kubelet[3229]: E0513 23:43:27.075816 3229 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:43:27.075866 kubelet[3229]: W0513 23:43:27.075852 3229 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:43:27.076104 kubelet[3229]: E0513 23:43:27.075886 3229 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:43:27.195618 containerd[1940]: time="2025-05-13T23:43:27.195140254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g4hkn,Uid:5a109143-6ed4-4877-9c34-a2c980484462,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0718cdb4426a2661edee8ea588e3af6e9490a0bbf63f2437af4d9ea9dd49566\"" May 13 23:43:27.208649 containerd[1940]: time="2025-05-13T23:43:27.208563646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:43:27.213430 containerd[1940]: time="2025-05-13T23:43:27.213299854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79bccb9478-gn7zt,Uid:173cc3de-f812-43ca-a051-9eea91bb3120,Namespace:calico-system,Attempt:0,} returns sandbox id \"64474e1ef9be4c2311f627297759ca0e37e0d451144d03a6f02273605355d6e6\"" May 13 23:43:28.700661 containerd[1940]: time="2025-05-13T23:43:28.700578002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:28.702438 containerd[1940]: time="2025-05-13T23:43:28.702346154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 23:43:28.705271 
containerd[1940]: time="2025-05-13T23:43:28.705033374Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:28.710032 containerd[1940]: time="2025-05-13T23:43:28.709846658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:28.711705 containerd[1940]: time="2025-05-13T23:43:28.711105950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.501602248s" May 13 23:43:28.711705 containerd[1940]: time="2025-05-13T23:43:28.711170762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 23:43:28.718095 containerd[1940]: time="2025-05-13T23:43:28.717903542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 23:43:28.723882 containerd[1940]: time="2025-05-13T23:43:28.723183266Z" level=info msg="CreateContainer within sandbox \"b0718cdb4426a2661edee8ea588e3af6e9490a0bbf63f2437af4d9ea9dd49566\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:43:28.744379 containerd[1940]: time="2025-05-13T23:43:28.744129146Z" level=info msg="Container 22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:28.776789 containerd[1940]: time="2025-05-13T23:43:28.776602214Z" level=info 
msg="CreateContainer within sandbox \"b0718cdb4426a2661edee8ea588e3af6e9490a0bbf63f2437af4d9ea9dd49566\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd\"" May 13 23:43:28.779144 containerd[1940]: time="2025-05-13T23:43:28.778973246Z" level=info msg="StartContainer for \"22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd\"" May 13 23:43:28.783968 containerd[1940]: time="2025-05-13T23:43:28.783898754Z" level=info msg="connecting to shim 22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd" address="unix:///run/containerd/s/119206a1aaaca3df7d5a309b3b97a4364e13cd53d8a9b01caf2f281012eaa32e" protocol=ttrpc version=3 May 13 23:43:28.842553 systemd[1]: Started cri-containerd-22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd.scope - libcontainer container 22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd. May 13 23:43:28.935144 containerd[1940]: time="2025-05-13T23:43:28.935053971Z" level=info msg="StartContainer for \"22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd\" returns successfully" May 13 23:43:28.966447 systemd[1]: cri-containerd-22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd.scope: Deactivated successfully. 
May 13 23:43:28.973158 containerd[1940]: time="2025-05-13T23:43:28.973103691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd\" id:\"22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd\" pid:3900 exited_at:{seconds:1747179808 nanos:972512019}" May 13 23:43:28.973332 containerd[1940]: time="2025-05-13T23:43:28.973276551Z" level=info msg="received exit event container_id:\"22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd\" id:\"22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd\" pid:3900 exited_at:{seconds:1747179808 nanos:972512019}" May 13 23:43:29.010857 kubelet[3229]: E0513 23:43:29.010107 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m6fkp" podUID="d371daf2-08ec-44bd-92c2-cd5610ab090d" May 13 23:43:29.032526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22a206e8dd1b71154bcb9dd71c9b0e0d1c0667a1ffe997aae48d7d2c62d572fd-rootfs.mount: Deactivated successfully. 
May 13 23:43:30.868020 containerd[1940]: time="2025-05-13T23:43:30.867956345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:30.869352 containerd[1940]: time="2025-05-13T23:43:30.869247965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 23:43:30.870631 containerd[1940]: time="2025-05-13T23:43:30.870554309Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:30.873631 containerd[1940]: time="2025-05-13T23:43:30.873587165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:30.875025 containerd[1940]: time="2025-05-13T23:43:30.874846229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.156878835s" May 13 23:43:30.875025 containerd[1940]: time="2025-05-13T23:43:30.874892165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 23:43:30.877171 containerd[1940]: time="2025-05-13T23:43:30.876761297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:43:30.900973 containerd[1940]: time="2025-05-13T23:43:30.900811613Z" level=info msg="CreateContainer within sandbox \"64474e1ef9be4c2311f627297759ca0e37e0d451144d03a6f02273605355d6e6\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:43:30.921344 containerd[1940]: time="2025-05-13T23:43:30.919527149Z" level=info msg="Container 73842d316053307a8f98b243d9706ef50f8d0d7fa6ccffb4fc491794d3ced7ad: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:30.947171 containerd[1940]: time="2025-05-13T23:43:30.947112485Z" level=info msg="CreateContainer within sandbox \"64474e1ef9be4c2311f627297759ca0e37e0d451144d03a6f02273605355d6e6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"73842d316053307a8f98b243d9706ef50f8d0d7fa6ccffb4fc491794d3ced7ad\"" May 13 23:43:30.948807 containerd[1940]: time="2025-05-13T23:43:30.948743585Z" level=info msg="StartContainer for \"73842d316053307a8f98b243d9706ef50f8d0d7fa6ccffb4fc491794d3ced7ad\"" May 13 23:43:30.952413 containerd[1940]: time="2025-05-13T23:43:30.952353593Z" level=info msg="connecting to shim 73842d316053307a8f98b243d9706ef50f8d0d7fa6ccffb4fc491794d3ced7ad" address="unix:///run/containerd/s/67ed080d3dbed6dfc64e1aa00a953567ecc2d3662e52330dd350805aecab98d7" protocol=ttrpc version=3 May 13 23:43:31.000633 systemd[1]: Started cri-containerd-73842d316053307a8f98b243d9706ef50f8d0d7fa6ccffb4fc491794d3ced7ad.scope - libcontainer container 73842d316053307a8f98b243d9706ef50f8d0d7fa6ccffb4fc491794d3ced7ad. 
May 13 23:43:31.010160 kubelet[3229]: E0513 23:43:31.009800 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m6fkp" podUID="d371daf2-08ec-44bd-92c2-cd5610ab090d" May 13 23:43:31.100373 containerd[1940]: time="2025-05-13T23:43:31.100067246Z" level=info msg="StartContainer for \"73842d316053307a8f98b243d9706ef50f8d0d7fa6ccffb4fc491794d3ced7ad\" returns successfully" May 13 23:43:32.278033 kubelet[3229]: I0513 23:43:32.276843 3229 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:43:33.009516 kubelet[3229]: E0513 23:43:33.009337 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m6fkp" podUID="d371daf2-08ec-44bd-92c2-cd5610ab090d" May 13 23:43:34.941659 containerd[1940]: time="2025-05-13T23:43:34.941584953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:34.943449 containerd[1940]: time="2025-05-13T23:43:34.943311621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 23:43:34.944755 containerd[1940]: time="2025-05-13T23:43:34.944675637Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:34.948141 containerd[1940]: time="2025-05-13T23:43:34.948062241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:34.949433 containerd[1940]: time="2025-05-13T23:43:34.949392309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.072575584s" May 13 23:43:34.949666 containerd[1940]: time="2025-05-13T23:43:34.949535385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 23:43:34.953472 containerd[1940]: time="2025-05-13T23:43:34.952842921Z" level=info msg="CreateContainer within sandbox \"b0718cdb4426a2661edee8ea588e3af6e9490a0bbf63f2437af4d9ea9dd49566\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:43:34.970298 containerd[1940]: time="2025-05-13T23:43:34.970199577Z" level=info msg="Container 4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:34.998252 containerd[1940]: time="2025-05-13T23:43:34.997463145Z" level=info msg="CreateContainer within sandbox \"b0718cdb4426a2661edee8ea588e3af6e9490a0bbf63f2437af4d9ea9dd49566\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4\"" May 13 23:43:35.002443 containerd[1940]: time="2025-05-13T23:43:35.002371697Z" level=info msg="StartContainer for \"4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4\"" May 13 23:43:35.007577 containerd[1940]: time="2025-05-13T23:43:35.007523261Z" level=info msg="connecting to shim 4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4" 
address="unix:///run/containerd/s/119206a1aaaca3df7d5a309b3b97a4364e13cd53d8a9b01caf2f281012eaa32e" protocol=ttrpc version=3 May 13 23:43:35.009840 kubelet[3229]: E0513 23:43:35.009772 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m6fkp" podUID="d371daf2-08ec-44bd-92c2-cd5610ab090d" May 13 23:43:35.052537 systemd[1]: Started cri-containerd-4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4.scope - libcontainer container 4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4. May 13 23:43:35.145404 containerd[1940]: time="2025-05-13T23:43:35.145295526Z" level=info msg="StartContainer for \"4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4\" returns successfully" May 13 23:43:35.357276 kubelet[3229]: I0513 23:43:35.354097 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79bccb9478-gn7zt" podStartSLOduration=5.69602404 podStartE2EDuration="9.354073843s" podCreationTimestamp="2025-05-13 23:43:26 +0000 UTC" firstStartedPulling="2025-05-13 23:43:27.21830875 +0000 UTC m=+15.386911661" lastFinishedPulling="2025-05-13 23:43:30.876358469 +0000 UTC m=+19.044961464" observedRunningTime="2025-05-13 23:43:31.330509127 +0000 UTC m=+19.499112050" watchObservedRunningTime="2025-05-13 23:43:35.354073843 +0000 UTC m=+23.522676766" May 13 23:43:35.706602 kubelet[3229]: I0513 23:43:35.706471 3229 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:43:36.126820 containerd[1940]: time="2025-05-13T23:43:36.126718507Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" May 13 23:43:36.132133 systemd[1]: cri-containerd-4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4.scope: Deactivated successfully. May 13 23:43:36.133482 systemd[1]: cri-containerd-4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4.scope: Consumed 863ms CPU time, 177.9M memory peak, 150.3M written to disk. May 13 23:43:36.138069 containerd[1940]: time="2025-05-13T23:43:36.137987239Z" level=info msg="received exit event container_id:\"4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4\" id:\"4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4\" pid:3998 exited_at:{seconds:1747179816 nanos:137681467}" May 13 23:43:36.138835 containerd[1940]: time="2025-05-13T23:43:36.138679627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4\" id:\"4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4\" pid:3998 exited_at:{seconds:1747179816 nanos:137681467}" May 13 23:43:36.150278 kubelet[3229]: I0513 23:43:36.149738 3229 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 23:43:36.197669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4da346b478393b382b6a1c7c8eed9f9f4888919d7206a33107d74bed5fa0f8a4-rootfs.mount: Deactivated successfully. May 13 23:43:36.238376 systemd[1]: Created slice kubepods-burstable-pod5a4a3961_4607_4b8f_acb2_d86d095f7cf8.slice - libcontainer container kubepods-burstable-pod5a4a3961_4607_4b8f_acb2_d86d095f7cf8.slice. May 13 23:43:36.282328 systemd[1]: Created slice kubepods-besteffort-podcdca3ef4_4276_4570_a8a4_faa7c5ce116d.slice - libcontainer container kubepods-besteffort-podcdca3ef4_4276_4570_a8a4_faa7c5ce116d.slice. 
May 13 23:43:36.304860 kubelet[3229]: I0513 23:43:36.304714 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4a3961-4607-4b8f-acb2-d86d095f7cf8-config-volume\") pod \"coredns-6f6b679f8f-ds8br\" (UID: \"5a4a3961-4607-4b8f-acb2-d86d095f7cf8\") " pod="kube-system/coredns-6f6b679f8f-ds8br" May 13 23:43:36.304860 kubelet[3229]: I0513 23:43:36.304795 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2rv\" (UniqueName: \"kubernetes.io/projected/5a4a3961-4607-4b8f-acb2-d86d095f7cf8-kube-api-access-4q2rv\") pod \"coredns-6f6b679f8f-ds8br\" (UID: \"5a4a3961-4607-4b8f-acb2-d86d095f7cf8\") " pod="kube-system/coredns-6f6b679f8f-ds8br" May 13 23:43:36.306879 systemd[1]: Created slice kubepods-burstable-pod26776fd4_9f96_41d9_b20a_4c6815a3c6a5.slice - libcontainer container kubepods-burstable-pod26776fd4_9f96_41d9_b20a_4c6815a3c6a5.slice. May 13 23:43:36.342860 systemd[1]: Created slice kubepods-besteffort-podcda0cd73_2f14_479a_a37f_508f0d07bf98.slice - libcontainer container kubepods-besteffort-podcda0cd73_2f14_479a_a37f_508f0d07bf98.slice. May 13 23:43:36.361544 systemd[1]: Created slice kubepods-besteffort-pode1ce5cf6_1384_44ba_ad34_7ba79f4d8925.slice - libcontainer container kubepods-besteffort-pode1ce5cf6_1384_44ba_ad34_7ba79f4d8925.slice. 
May 13 23:43:36.406086 kubelet[3229]: I0513 23:43:36.405929 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1ce5cf6-1384-44ba-ad34-7ba79f4d8925-calico-apiserver-certs\") pod \"calico-apiserver-7c75d84548-wpkv4\" (UID: \"e1ce5cf6-1384-44ba-ad34-7ba79f4d8925\") " pod="calico-apiserver/calico-apiserver-7c75d84548-wpkv4" May 13 23:43:36.406086 kubelet[3229]: I0513 23:43:36.406009 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdca3ef4-4276-4570-a8a4-faa7c5ce116d-tigera-ca-bundle\") pod \"calico-kube-controllers-845cb5b5b9-57b2c\" (UID: \"cdca3ef4-4276-4570-a8a4-faa7c5ce116d\") " pod="calico-system/calico-kube-controllers-845cb5b5b9-57b2c" May 13 23:43:36.406086 kubelet[3229]: I0513 23:43:36.406049 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjft\" (UniqueName: \"kubernetes.io/projected/26776fd4-9f96-41d9-b20a-4c6815a3c6a5-kube-api-access-4cjft\") pod \"coredns-6f6b679f8f-r74k7\" (UID: \"26776fd4-9f96-41d9-b20a-4c6815a3c6a5\") " pod="kube-system/coredns-6f6b679f8f-r74k7" May 13 23:43:36.406368 kubelet[3229]: I0513 23:43:36.406097 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26776fd4-9f96-41d9-b20a-4c6815a3c6a5-config-volume\") pod \"coredns-6f6b679f8f-r74k7\" (UID: \"26776fd4-9f96-41d9-b20a-4c6815a3c6a5\") " pod="kube-system/coredns-6f6b679f8f-r74k7" May 13 23:43:36.406368 kubelet[3229]: I0513 23:43:36.406143 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cda0cd73-2f14-479a-a37f-508f0d07bf98-calico-apiserver-certs\") pod 
\"calico-apiserver-7c75d84548-n6glm\" (UID: \"cda0cd73-2f14-479a-a37f-508f0d07bf98\") " pod="calico-apiserver/calico-apiserver-7c75d84548-n6glm" May 13 23:43:36.406368 kubelet[3229]: I0513 23:43:36.406242 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d728\" (UniqueName: \"kubernetes.io/projected/cda0cd73-2f14-479a-a37f-508f0d07bf98-kube-api-access-7d728\") pod \"calico-apiserver-7c75d84548-n6glm\" (UID: \"cda0cd73-2f14-479a-a37f-508f0d07bf98\") " pod="calico-apiserver/calico-apiserver-7c75d84548-n6glm" May 13 23:43:36.406368 kubelet[3229]: I0513 23:43:36.406327 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d85ct\" (UniqueName: \"kubernetes.io/projected/cdca3ef4-4276-4570-a8a4-faa7c5ce116d-kube-api-access-d85ct\") pod \"calico-kube-controllers-845cb5b5b9-57b2c\" (UID: \"cdca3ef4-4276-4570-a8a4-faa7c5ce116d\") " pod="calico-system/calico-kube-controllers-845cb5b5b9-57b2c" May 13 23:43:36.406586 kubelet[3229]: I0513 23:43:36.406412 3229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84vpn\" (UniqueName: \"kubernetes.io/projected/e1ce5cf6-1384-44ba-ad34-7ba79f4d8925-kube-api-access-84vpn\") pod \"calico-apiserver-7c75d84548-wpkv4\" (UID: \"e1ce5cf6-1384-44ba-ad34-7ba79f4d8925\") " pod="calico-apiserver/calico-apiserver-7c75d84548-wpkv4" May 13 23:43:36.563170 containerd[1940]: time="2025-05-13T23:43:36.561672957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ds8br,Uid:5a4a3961-4607-4b8f-acb2-d86d095f7cf8,Namespace:kube-system,Attempt:0,}" May 13 23:43:36.594004 containerd[1940]: time="2025-05-13T23:43:36.593919525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845cb5b5b9-57b2c,Uid:cdca3ef4-4276-4570-a8a4-faa7c5ce116d,Namespace:calico-system,Attempt:0,}" May 13 23:43:36.620406 
containerd[1940]: time="2025-05-13T23:43:36.620271393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r74k7,Uid:26776fd4-9f96-41d9-b20a-4c6815a3c6a5,Namespace:kube-system,Attempt:0,}" May 13 23:43:36.655877 containerd[1940]: time="2025-05-13T23:43:36.655680333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c75d84548-n6glm,Uid:cda0cd73-2f14-479a-a37f-508f0d07bf98,Namespace:calico-apiserver,Attempt:0,}" May 13 23:43:36.672576 containerd[1940]: time="2025-05-13T23:43:36.672387117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c75d84548-wpkv4,Uid:e1ce5cf6-1384-44ba-ad34-7ba79f4d8925,Namespace:calico-apiserver,Attempt:0,}" May 13 23:43:36.909154 containerd[1940]: time="2025-05-13T23:43:36.908941307Z" level=error msg="Failed to destroy network for sandbox \"c4e520df8d2130ca7b040c98ec65b47538b793294881baafd584c6ce137cbe1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.021197 systemd[1]: Created slice kubepods-besteffort-podd371daf2_08ec_44bd_92c2_cd5610ab090d.slice - libcontainer container kubepods-besteffort-podd371daf2_08ec_44bd_92c2_cd5610ab090d.slice. 
May 13 23:43:37.026362 containerd[1940]: time="2025-05-13T23:43:37.026183311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m6fkp,Uid:d371daf2-08ec-44bd-92c2-cd5610ab090d,Namespace:calico-system,Attempt:0,}" May 13 23:43:37.254288 containerd[1940]: time="2025-05-13T23:43:37.251646428Z" level=error msg="Failed to destroy network for sandbox \"683b53fd5b843eea07654013edce0b64907c3c7e04e582b7494e935ee74d8e46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.256297 systemd[1]: run-netns-cni\x2d312aefe8\x2df84d\x2d3479\x2d5b13\x2d97f64ec5d8fe.mount: Deactivated successfully. May 13 23:43:37.378252 containerd[1940]: time="2025-05-13T23:43:37.378083685Z" level=error msg="Failed to destroy network for sandbox \"b29decdc23a3baa6ee0d0690e393729036dbe0582cc1dc117b099c712bf6a095\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.382095 systemd[1]: run-netns-cni\x2d5ebf11a1\x2dded6\x2da485\x2d0dae\x2d653f11f9ee4a.mount: Deactivated successfully. 
May 13 23:43:37.441812 containerd[1940]: time="2025-05-13T23:43:37.441475401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ds8br,Uid:5a4a3961-4607-4b8f-acb2-d86d095f7cf8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e520df8d2130ca7b040c98ec65b47538b793294881baafd584c6ce137cbe1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.442161 kubelet[3229]: E0513 23:43:37.442062 3229 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e520df8d2130ca7b040c98ec65b47538b793294881baafd584c6ce137cbe1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.442762 kubelet[3229]: E0513 23:43:37.442194 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e520df8d2130ca7b040c98ec65b47538b793294881baafd584c6ce137cbe1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ds8br" May 13 23:43:37.442762 kubelet[3229]: E0513 23:43:37.442248 3229 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e520df8d2130ca7b040c98ec65b47538b793294881baafd584c6ce137cbe1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-ds8br" May 13 23:43:37.442762 kubelet[3229]: E0513 23:43:37.442328 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ds8br_kube-system(5a4a3961-4607-4b8f-acb2-d86d095f7cf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ds8br_kube-system(5a4a3961-4607-4b8f-acb2-d86d095f7cf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4e520df8d2130ca7b040c98ec65b47538b793294881baafd584c6ce137cbe1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ds8br" podUID="5a4a3961-4607-4b8f-acb2-d86d095f7cf8" May 13 23:43:37.468021 containerd[1940]: time="2025-05-13T23:43:37.467730477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845cb5b5b9-57b2c,Uid:cdca3ef4-4276-4570-a8a4-faa7c5ce116d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"683b53fd5b843eea07654013edce0b64907c3c7e04e582b7494e935ee74d8e46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.470688 kubelet[3229]: E0513 23:43:37.469042 3229 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"683b53fd5b843eea07654013edce0b64907c3c7e04e582b7494e935ee74d8e46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.470688 kubelet[3229]: E0513 23:43:37.469130 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"683b53fd5b843eea07654013edce0b64907c3c7e04e582b7494e935ee74d8e46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-845cb5b5b9-57b2c" May 13 23:43:37.470688 kubelet[3229]: E0513 23:43:37.469164 3229 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"683b53fd5b843eea07654013edce0b64907c3c7e04e582b7494e935ee74d8e46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-845cb5b5b9-57b2c" May 13 23:43:37.472316 kubelet[3229]: E0513 23:43:37.470351 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-845cb5b5b9-57b2c_calico-system(cdca3ef4-4276-4570-a8a4-faa7c5ce116d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-845cb5b5b9-57b2c_calico-system(cdca3ef4-4276-4570-a8a4-faa7c5ce116d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"683b53fd5b843eea07654013edce0b64907c3c7e04e582b7494e935ee74d8e46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-845cb5b5b9-57b2c" podUID="cdca3ef4-4276-4570-a8a4-faa7c5ce116d" May 13 23:43:37.476340 containerd[1940]: time="2025-05-13T23:43:37.475972353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r74k7,Uid:26776fd4-9f96-41d9-b20a-4c6815a3c6a5,Namespace:kube-system,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29decdc23a3baa6ee0d0690e393729036dbe0582cc1dc117b099c712bf6a095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.477209 kubelet[3229]: E0513 23:43:37.476880 3229 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29decdc23a3baa6ee0d0690e393729036dbe0582cc1dc117b099c712bf6a095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.477209 kubelet[3229]: E0513 23:43:37.476957 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29decdc23a3baa6ee0d0690e393729036dbe0582cc1dc117b099c712bf6a095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-r74k7" May 13 23:43:37.477209 kubelet[3229]: E0513 23:43:37.476995 3229 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29decdc23a3baa6ee0d0690e393729036dbe0582cc1dc117b099c712bf6a095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-r74k7" May 13 23:43:37.477735 kubelet[3229]: E0513 23:43:37.477054 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-r74k7_kube-system(26776fd4-9f96-41d9-b20a-4c6815a3c6a5)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-r74k7_kube-system(26776fd4-9f96-41d9-b20a-4c6815a3c6a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b29decdc23a3baa6ee0d0690e393729036dbe0582cc1dc117b099c712bf6a095\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-r74k7" podUID="26776fd4-9f96-41d9-b20a-4c6815a3c6a5" May 13 23:43:37.570254 containerd[1940]: time="2025-05-13T23:43:37.568140142Z" level=error msg="Failed to destroy network for sandbox \"b64bf268376a488e44611a732f3fb7d8f8e52f2918e2c26ad63e40cbc6c75722\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.572565 systemd[1]: run-netns-cni\x2da794c95f\x2dd008\x2d7529\x2db757\x2df94a217c73bf.mount: Deactivated successfully. 
May 13 23:43:37.577580 containerd[1940]: time="2025-05-13T23:43:37.577200718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c75d84548-n6glm,Uid:cda0cd73-2f14-479a-a37f-508f0d07bf98,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64bf268376a488e44611a732f3fb7d8f8e52f2918e2c26ad63e40cbc6c75722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.579706 kubelet[3229]: E0513 23:43:37.578283 3229 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64bf268376a488e44611a732f3fb7d8f8e52f2918e2c26ad63e40cbc6c75722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.579706 kubelet[3229]: E0513 23:43:37.578362 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64bf268376a488e44611a732f3fb7d8f8e52f2918e2c26ad63e40cbc6c75722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c75d84548-n6glm" May 13 23:43:37.579706 kubelet[3229]: E0513 23:43:37.578399 3229 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64bf268376a488e44611a732f3fb7d8f8e52f2918e2c26ad63e40cbc6c75722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7c75d84548-n6glm" May 13 23:43:37.579966 kubelet[3229]: E0513 23:43:37.579628 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c75d84548-n6glm_calico-apiserver(cda0cd73-2f14-479a-a37f-508f0d07bf98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c75d84548-n6glm_calico-apiserver(cda0cd73-2f14-479a-a37f-508f0d07bf98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b64bf268376a488e44611a732f3fb7d8f8e52f2918e2c26ad63e40cbc6c75722\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c75d84548-n6glm" podUID="cda0cd73-2f14-479a-a37f-508f0d07bf98" May 13 23:43:37.600654 containerd[1940]: time="2025-05-13T23:43:37.600594334Z" level=error msg="Failed to destroy network for sandbox \"06ddadcfb18b3781441650b3e086f1f548b041ba5556b9c0f951ff75a2d59fb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.603586 containerd[1940]: time="2025-05-13T23:43:37.603519982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c75d84548-wpkv4,Uid:e1ce5cf6-1384-44ba-ad34-7ba79f4d8925,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ddadcfb18b3781441650b3e086f1f548b041ba5556b9c0f951ff75a2d59fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.604935 kubelet[3229]: E0513 23:43:37.604213 3229 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ddadcfb18b3781441650b3e086f1f548b041ba5556b9c0f951ff75a2d59fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.605428 kubelet[3229]: E0513 23:43:37.605372 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ddadcfb18b3781441650b3e086f1f548b041ba5556b9c0f951ff75a2d59fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c75d84548-wpkv4" May 13 23:43:37.605527 kubelet[3229]: E0513 23:43:37.605455 3229 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ddadcfb18b3781441650b3e086f1f548b041ba5556b9c0f951ff75a2d59fb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c75d84548-wpkv4" May 13 23:43:37.605616 kubelet[3229]: E0513 23:43:37.605564 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c75d84548-wpkv4_calico-apiserver(e1ce5cf6-1384-44ba-ad34-7ba79f4d8925)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c75d84548-wpkv4_calico-apiserver(e1ce5cf6-1384-44ba-ad34-7ba79f4d8925)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06ddadcfb18b3781441650b3e086f1f548b041ba5556b9c0f951ff75a2d59fb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c75d84548-wpkv4" podUID="e1ce5cf6-1384-44ba-ad34-7ba79f4d8925" May 13 23:43:37.646706 containerd[1940]: time="2025-05-13T23:43:37.646555810Z" level=error msg="Failed to destroy network for sandbox \"274297286c7445cc9ac23ae9f2f487a3fbb2d6c894815f97d3cfa6ae9f0ce5a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.649180 containerd[1940]: time="2025-05-13T23:43:37.649086634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m6fkp,Uid:d371daf2-08ec-44bd-92c2-cd5610ab090d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"274297286c7445cc9ac23ae9f2f487a3fbb2d6c894815f97d3cfa6ae9f0ce5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.649532 kubelet[3229]: E0513 23:43:37.649452 3229 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"274297286c7445cc9ac23ae9f2f487a3fbb2d6c894815f97d3cfa6ae9f0ce5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:43:37.649660 kubelet[3229]: E0513 23:43:37.649543 3229 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"274297286c7445cc9ac23ae9f2f487a3fbb2d6c894815f97d3cfa6ae9f0ce5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m6fkp" May 13 23:43:37.649660 kubelet[3229]: E0513 23:43:37.649578 3229 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"274297286c7445cc9ac23ae9f2f487a3fbb2d6c894815f97d3cfa6ae9f0ce5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m6fkp" May 13 23:43:37.649884 kubelet[3229]: E0513 23:43:37.649651 3229 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m6fkp_calico-system(d371daf2-08ec-44bd-92c2-cd5610ab090d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m6fkp_calico-system(d371daf2-08ec-44bd-92c2-cd5610ab090d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"274297286c7445cc9ac23ae9f2f487a3fbb2d6c894815f97d3cfa6ae9f0ce5a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m6fkp" podUID="d371daf2-08ec-44bd-92c2-cd5610ab090d" May 13 23:43:38.191718 systemd[1]: run-netns-cni\x2dc6884289\x2d5666\x2db69d\x2da27b\x2ddc37f0aa19f5.mount: Deactivated successfully. May 13 23:43:38.191896 systemd[1]: run-netns-cni\x2d598cdc1c\x2d805c\x2d579b\x2dcfbb\x2dd5f75eae6e5a.mount: Deactivated successfully. May 13 23:43:38.314002 containerd[1940]: time="2025-05-13T23:43:38.313935946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:43:44.100617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029123217.mount: Deactivated successfully. 
May 13 23:43:44.173437 containerd[1940]: time="2025-05-13T23:43:44.173068887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:44.175357 containerd[1940]: time="2025-05-13T23:43:44.175212195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 23:43:44.177655 containerd[1940]: time="2025-05-13T23:43:44.177547383Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:44.185497 containerd[1940]: time="2025-05-13T23:43:44.185414391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:44.188282 containerd[1940]: time="2025-05-13T23:43:44.187941195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 5.873926901s" May 13 23:43:44.188282 containerd[1940]: time="2025-05-13T23:43:44.188046771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 23:43:44.217862 containerd[1940]: time="2025-05-13T23:43:44.217555119Z" level=info msg="CreateContainer within sandbox \"b0718cdb4426a2661edee8ea588e3af6e9490a0bbf63f2437af4d9ea9dd49566\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:43:44.239004 containerd[1940]: time="2025-05-13T23:43:44.238476051Z" level=info msg="Container 
6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:44.260056 containerd[1940]: time="2025-05-13T23:43:44.259963587Z" level=info msg="CreateContainer within sandbox \"b0718cdb4426a2661edee8ea588e3af6e9490a0bbf63f2437af4d9ea9dd49566\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae\"" May 13 23:43:44.261295 containerd[1940]: time="2025-05-13T23:43:44.261215355Z" level=info msg="StartContainer for \"6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae\"" May 13 23:43:44.264900 containerd[1940]: time="2025-05-13T23:43:44.264841479Z" level=info msg="connecting to shim 6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae" address="unix:///run/containerd/s/119206a1aaaca3df7d5a309b3b97a4364e13cd53d8a9b01caf2f281012eaa32e" protocol=ttrpc version=3 May 13 23:43:44.301558 systemd[1]: Started cri-containerd-6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae.scope - libcontainer container 6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae. May 13 23:43:44.394209 containerd[1940]: time="2025-05-13T23:43:44.392822836Z" level=info msg="StartContainer for \"6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae\" returns successfully" May 13 23:43:44.505288 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:43:44.505959 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 13 23:43:45.382933 kubelet[3229]: I0513 23:43:45.382049 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g4hkn" podStartSLOduration=2.39840018 podStartE2EDuration="19.382028225s" podCreationTimestamp="2025-05-13 23:43:26 +0000 UTC" firstStartedPulling="2025-05-13 23:43:27.207299206 +0000 UTC m=+15.375902117" lastFinishedPulling="2025-05-13 23:43:44.190927263 +0000 UTC m=+32.359530162" observedRunningTime="2025-05-13 23:43:45.376864025 +0000 UTC m=+33.545466948" watchObservedRunningTime="2025-05-13 23:43:45.382028225 +0000 UTC m=+33.550631136" May 13 23:43:45.490410 containerd[1940]: time="2025-05-13T23:43:45.490212149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae\" id:\"4135b9ad20a12f1769906013b910df45d21c334262a0f14d2fd6c141f4655bba\" pid:4294 exit_status:1 exited_at:{seconds:1747179825 nanos:489181733}" May 13 23:43:46.716393 containerd[1940]: time="2025-05-13T23:43:46.716309647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae\" id:\"0824ca651cd76648bd0851aaee4fd8339ac334bf8017c36c511ceb6f0f4e0f9e\" pid:4405 exit_status:1 exited_at:{seconds:1747179826 nanos:715507999}" May 13 23:43:46.726269 kernel: bpftool[4454]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 23:43:47.072139 (udev-worker)[4255]: Network interface NamePolicy= disabled on kernel command line. May 13 23:43:47.081689 systemd-networkd[1860]: vxlan.calico: Link UP May 13 23:43:47.081707 systemd-networkd[1860]: vxlan.calico: Gained carrier May 13 23:43:47.129463 (udev-worker)[4256]: Network interface NamePolicy= disabled on kernel command line. 
May 13 23:43:48.564580 systemd-networkd[1860]: vxlan.calico: Gained IPv6LL May 13 23:43:49.010940 containerd[1940]: time="2025-05-13T23:43:49.010534351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ds8br,Uid:5a4a3961-4607-4b8f-acb2-d86d095f7cf8,Namespace:kube-system,Attempt:0,}" May 13 23:43:49.010940 containerd[1940]: time="2025-05-13T23:43:49.010534423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c75d84548-wpkv4,Uid:e1ce5cf6-1384-44ba-ad34-7ba79f4d8925,Namespace:calico-apiserver,Attempt:0,}" May 13 23:43:49.377074 systemd-networkd[1860]: calic64e2bd0de6: Link UP May 13 23:43:49.379452 systemd-networkd[1860]: calic64e2bd0de6: Gained carrier May 13 23:43:49.410774 containerd[1940]: 2025-05-13 23:43:49.178 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0 calico-apiserver-7c75d84548- calico-apiserver e1ce5cf6-1384-44ba-ad34-7ba79f4d8925 710 0 2025-05-13 23:43:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c75d84548 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-246 calico-apiserver-7c75d84548-wpkv4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic64e2bd0de6 [] []}} ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-wpkv4" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-" May 13 23:43:49.410774 containerd[1940]: 2025-05-13 23:43:49.178 [INFO][4527] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Namespace="calico-apiserver" 
Pod="calico-apiserver-7c75d84548-wpkv4" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0" May 13 23:43:49.410774 containerd[1940]: 2025-05-13 23:43:49.287 [INFO][4553] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" HandleID="k8s-pod-network.06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Workload="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0" May 13 23:43:49.411635 containerd[1940]: 2025-05-13 23:43:49.310 [INFO][4553] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" HandleID="k8s-pod-network.06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Workload="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c5970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-246", "pod":"calico-apiserver-7c75d84548-wpkv4", "timestamp":"2025-05-13 23:43:49.287126672 +0000 UTC"}, Hostname:"ip-172-31-17-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:49.411635 containerd[1940]: 2025-05-13 23:43:49.310 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:49.411635 containerd[1940]: 2025-05-13 23:43:49.311 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:43:49.411635 containerd[1940]: 2025-05-13 23:43:49.311 [INFO][4553] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-246' May 13 23:43:49.411635 containerd[1940]: 2025-05-13 23:43:49.315 [INFO][4553] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" host="ip-172-31-17-246" May 13 23:43:49.411635 containerd[1940]: 2025-05-13 23:43:49.324 [INFO][4553] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-246" May 13 23:43:49.411635 containerd[1940]: 2025-05-13 23:43:49.332 [INFO][4553] ipam/ipam.go 489: Trying affinity for 192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:49.411635 containerd[1940]: 2025-05-13 23:43:49.335 [INFO][4553] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:49.411635 containerd[1940]: 2025-05-13 23:43:49.339 [INFO][4553] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:49.412157 containerd[1940]: 2025-05-13 23:43:49.339 [INFO][4553] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" host="ip-172-31-17-246" May 13 23:43:49.412157 containerd[1940]: 2025-05-13 23:43:49.342 [INFO][4553] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069 May 13 23:43:49.412157 containerd[1940]: 2025-05-13 23:43:49.348 [INFO][4553] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" host="ip-172-31-17-246" May 13 23:43:49.412157 containerd[1940]: 2025-05-13 23:43:49.359 [INFO][4553] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.1/26] block=192.168.37.0/26 
handle="k8s-pod-network.06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" host="ip-172-31-17-246" May 13 23:43:49.412157 containerd[1940]: 2025-05-13 23:43:49.359 [INFO][4553] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.1/26] handle="k8s-pod-network.06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" host="ip-172-31-17-246" May 13 23:43:49.412157 containerd[1940]: 2025-05-13 23:43:49.359 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:43:49.412157 containerd[1940]: 2025-05-13 23:43:49.359 [INFO][4553] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.1/26] IPv6=[] ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" HandleID="k8s-pod-network.06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Workload="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0" May 13 23:43:49.413518 containerd[1940]: 2025-05-13 23:43:49.369 [INFO][4527] cni-plugin/k8s.go 386: Populated endpoint ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-wpkv4" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0", GenerateName:"calico-apiserver-7c75d84548-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1ce5cf6-1384-44ba-ad34-7ba79f4d8925", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c75d84548", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"", Pod:"calico-apiserver-7c75d84548-wpkv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic64e2bd0de6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:49.413764 containerd[1940]: 2025-05-13 23:43:49.369 [INFO][4527] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.1/32] ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-wpkv4" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0" May 13 23:43:49.413764 containerd[1940]: 2025-05-13 23:43:49.369 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic64e2bd0de6 ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-wpkv4" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0" May 13 23:43:49.413764 containerd[1940]: 2025-05-13 23:43:49.381 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-wpkv4" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0" May 13 23:43:49.414076 containerd[1940]: 2025-05-13 
23:43:49.383 [INFO][4527] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-wpkv4" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0", GenerateName:"calico-apiserver-7c75d84548-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1ce5cf6-1384-44ba-ad34-7ba79f4d8925", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c75d84548", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069", Pod:"calico-apiserver-7c75d84548-wpkv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic64e2bd0de6", MAC:"d2:ac:5e:0a:65:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:49.414203 containerd[1940]: 2025-05-13 23:43:49.404 [INFO][4527] cni-plugin/k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-wpkv4" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--wpkv4-eth0" May 13 23:43:49.503721 systemd-networkd[1860]: cali359e52287b4: Link UP May 13 23:43:49.506859 systemd-networkd[1860]: cali359e52287b4: Gained carrier May 13 23:43:49.555185 containerd[1940]: 2025-05-13 23:43:49.178 [INFO][4535] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0 coredns-6f6b679f8f- kube-system 5a4a3961-4607-4b8f-acb2-d86d095f7cf8 707 0 2025-05-13 23:43:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-246 coredns-6f6b679f8f-ds8br eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali359e52287b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ds8br" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-" May 13 23:43:49.555185 containerd[1940]: 2025-05-13 23:43:49.178 [INFO][4535] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ds8br" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0" May 13 23:43:49.555185 containerd[1940]: 2025-05-13 23:43:49.298 [INFO][4558] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" 
HandleID="k8s-pod-network.4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Workload="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0" May 13 23:43:49.555507 containerd[1940]: 2025-05-13 23:43:49.326 [INFO][4558] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" HandleID="k8s-pod-network.4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Workload="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003eade0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-246", "pod":"coredns-6f6b679f8f-ds8br", "timestamp":"2025-05-13 23:43:49.298937504 +0000 UTC"}, Hostname:"ip-172-31-17-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:49.555507 containerd[1940]: 2025-05-13 23:43:49.326 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:49.555507 containerd[1940]: 2025-05-13 23:43:49.360 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:43:49.555507 containerd[1940]: 2025-05-13 23:43:49.360 [INFO][4558] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-246' May 13 23:43:49.555507 containerd[1940]: 2025-05-13 23:43:49.418 [INFO][4558] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" host="ip-172-31-17-246" May 13 23:43:49.555507 containerd[1940]: 2025-05-13 23:43:49.429 [INFO][4558] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-246" May 13 23:43:49.555507 containerd[1940]: 2025-05-13 23:43:49.438 [INFO][4558] ipam/ipam.go 489: Trying affinity for 192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:49.555507 containerd[1940]: 2025-05-13 23:43:49.443 [INFO][4558] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:49.555507 containerd[1940]: 2025-05-13 23:43:49.450 [INFO][4558] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:49.556060 containerd[1940]: 2025-05-13 23:43:49.451 [INFO][4558] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" host="ip-172-31-17-246" May 13 23:43:49.556060 containerd[1940]: 2025-05-13 23:43:49.453 [INFO][4558] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd May 13 23:43:49.556060 containerd[1940]: 2025-05-13 23:43:49.463 [INFO][4558] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" host="ip-172-31-17-246" May 13 23:43:49.556060 containerd[1940]: 2025-05-13 23:43:49.482 [INFO][4558] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.2/26] block=192.168.37.0/26 
handle="k8s-pod-network.4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" host="ip-172-31-17-246" May 13 23:43:49.556060 containerd[1940]: 2025-05-13 23:43:49.483 [INFO][4558] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.2/26] handle="k8s-pod-network.4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" host="ip-172-31-17-246" May 13 23:43:49.556060 containerd[1940]: 2025-05-13 23:43:49.483 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:43:49.556060 containerd[1940]: 2025-05-13 23:43:49.483 [INFO][4558] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.2/26] IPv6=[] ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" HandleID="k8s-pod-network.4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Workload="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0" May 13 23:43:49.557111 containerd[1940]: 2025-05-13 23:43:49.493 [INFO][4535] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ds8br" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5a4a3961-4607-4b8f-acb2-d86d095f7cf8", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"", Pod:"coredns-6f6b679f8f-ds8br", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali359e52287b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:49.557313 containerd[1940]: 2025-05-13 23:43:49.494 [INFO][4535] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.2/32] ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ds8br" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0" May 13 23:43:49.557313 containerd[1940]: 2025-05-13 23:43:49.494 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali359e52287b4 ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ds8br" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0" May 13 23:43:49.557313 containerd[1940]: 2025-05-13 23:43:49.507 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ds8br" 
WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0" May 13 23:43:49.557480 containerd[1940]: 2025-05-13 23:43:49.511 [INFO][4535] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ds8br" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5a4a3961-4607-4b8f-acb2-d86d095f7cf8", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd", Pod:"coredns-6f6b679f8f-ds8br", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali359e52287b4", MAC:"f2:e1:36:d6:dd:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:49.557480 containerd[1940]: 2025-05-13 23:43:49.549 [INFO][4535] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" Namespace="kube-system" Pod="coredns-6f6b679f8f-ds8br" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--ds8br-eth0" May 13 23:43:49.583913 containerd[1940]: time="2025-05-13T23:43:49.583299970Z" level=info msg="connecting to shim 06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069" address="unix:///run/containerd/s/cc4e68cf91db0ace9357205b5806cf1113748d9b656cd7cbec44e9963b27de4c" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:49.642577 systemd[1]: Started cri-containerd-06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069.scope - libcontainer container 06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069. May 13 23:43:49.677979 containerd[1940]: time="2025-05-13T23:43:49.677763142Z" level=info msg="connecting to shim 4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd" address="unix:///run/containerd/s/b439951118acdc20fdaf3e676613a63512ebda4c2d68d17f55e1774dc574b2bc" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:49.728808 systemd[1]: Started cri-containerd-4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd.scope - libcontainer container 4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd. 
May 13 23:43:49.765398 containerd[1940]: time="2025-05-13T23:43:49.765331246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c75d84548-wpkv4,Uid:e1ce5cf6-1384-44ba-ad34-7ba79f4d8925,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069\"" May 13 23:43:49.772352 containerd[1940]: time="2025-05-13T23:43:49.770463586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:43:49.843766 containerd[1940]: time="2025-05-13T23:43:49.843680855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ds8br,Uid:5a4a3961-4607-4b8f-acb2-d86d095f7cf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd\"" May 13 23:43:49.850315 containerd[1940]: time="2025-05-13T23:43:49.850202807Z" level=info msg="CreateContainer within sandbox \"4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:43:49.870653 containerd[1940]: time="2025-05-13T23:43:49.870412655Z" level=info msg="Container bbc735b3ca9ae7cc7c72a3a65b4beb3df76cb72fa98ee84b3f1cd441127a9c64: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:49.883102 containerd[1940]: time="2025-05-13T23:43:49.882971363Z" level=info msg="CreateContainer within sandbox \"4a60d8616ac1f74bff5ed16916221cf166d9796e55ef6b2add5e37e1f535ecfd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbc735b3ca9ae7cc7c72a3a65b4beb3df76cb72fa98ee84b3f1cd441127a9c64\"" May 13 23:43:49.884623 containerd[1940]: time="2025-05-13T23:43:49.884552975Z" level=info msg="StartContainer for \"bbc735b3ca9ae7cc7c72a3a65b4beb3df76cb72fa98ee84b3f1cd441127a9c64\"" May 13 23:43:49.886547 containerd[1940]: time="2025-05-13T23:43:49.886471283Z" level=info msg="connecting to shim bbc735b3ca9ae7cc7c72a3a65b4beb3df76cb72fa98ee84b3f1cd441127a9c64" 
address="unix:///run/containerd/s/b439951118acdc20fdaf3e676613a63512ebda4c2d68d17f55e1774dc574b2bc" protocol=ttrpc version=3 May 13 23:43:49.922527 systemd[1]: Started cri-containerd-bbc735b3ca9ae7cc7c72a3a65b4beb3df76cb72fa98ee84b3f1cd441127a9c64.scope - libcontainer container bbc735b3ca9ae7cc7c72a3a65b4beb3df76cb72fa98ee84b3f1cd441127a9c64. May 13 23:43:49.988619 containerd[1940]: time="2025-05-13T23:43:49.988454160Z" level=info msg="StartContainer for \"bbc735b3ca9ae7cc7c72a3a65b4beb3df76cb72fa98ee84b3f1cd441127a9c64\" returns successfully" May 13 23:43:50.382253 kubelet[3229]: I0513 23:43:50.381412 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ds8br" podStartSLOduration=34.381387117 podStartE2EDuration="34.381387117s" podCreationTimestamp="2025-05-13 23:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:50.380076981 +0000 UTC m=+38.548679940" watchObservedRunningTime="2025-05-13 23:43:50.381387117 +0000 UTC m=+38.549990040" May 13 23:43:50.549067 systemd-networkd[1860]: cali359e52287b4: Gained IPv6LL May 13 23:43:50.996685 systemd-networkd[1860]: calic64e2bd0de6: Gained IPv6LL May 13 23:43:51.011391 containerd[1940]: time="2025-05-13T23:43:51.010758933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c75d84548-n6glm,Uid:cda0cd73-2f14-479a-a37f-508f0d07bf98,Namespace:calico-apiserver,Attempt:0,}" May 13 23:43:51.011391 containerd[1940]: time="2025-05-13T23:43:51.011127237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r74k7,Uid:26776fd4-9f96-41d9-b20a-4c6815a3c6a5,Namespace:kube-system,Attempt:0,}" May 13 23:43:51.323145 systemd-networkd[1860]: calic02afa6d08f: Link UP May 13 23:43:51.323986 systemd-networkd[1860]: calic02afa6d08f: Gained carrier May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.155 [INFO][4720] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0 calico-apiserver-7c75d84548- calico-apiserver cda0cd73-2f14-479a-a37f-508f0d07bf98 709 0 2025-05-13 23:43:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c75d84548 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-246 calico-apiserver-7c75d84548-n6glm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic02afa6d08f [] []}} ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-n6glm" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.155 [INFO][4720] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-n6glm" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.244 [INFO][4744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" HandleID="k8s-pod-network.12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Workload="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.263 [INFO][4744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" 
HandleID="k8s-pod-network.12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Workload="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000407b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-246", "pod":"calico-apiserver-7c75d84548-n6glm", "timestamp":"2025-05-13 23:43:51.244489738 +0000 UTC"}, Hostname:"ip-172-31-17-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.263 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.264 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.264 [INFO][4744] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-246' May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.270 [INFO][4744] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" host="ip-172-31-17-246" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.277 [INFO][4744] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-246" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.285 [INFO][4744] ipam/ipam.go 489: Trying affinity for 192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.288 [INFO][4744] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.291 [INFO][4744] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 
host="ip-172-31-17-246" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.292 [INFO][4744] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" host="ip-172-31-17-246" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.294 [INFO][4744] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.301 [INFO][4744] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" host="ip-172-31-17-246" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.312 [INFO][4744] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.3/26] block=192.168.37.0/26 handle="k8s-pod-network.12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" host="ip-172-31-17-246" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.313 [INFO][4744] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.3/26] handle="k8s-pod-network.12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" host="ip-172-31-17-246" May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.313 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:43:51.357744 containerd[1940]: 2025-05-13 23:43:51.313 [INFO][4744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.3/26] IPv6=[] ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" HandleID="k8s-pod-network.12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Workload="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0" May 13 23:43:51.360659 containerd[1940]: 2025-05-13 23:43:51.317 [INFO][4720] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-n6glm" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0", GenerateName:"calico-apiserver-7c75d84548-", Namespace:"calico-apiserver", SelfLink:"", UID:"cda0cd73-2f14-479a-a37f-508f0d07bf98", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c75d84548", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"", Pod:"calico-apiserver-7c75d84548-n6glm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic02afa6d08f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:51.360659 containerd[1940]: 2025-05-13 23:43:51.317 [INFO][4720] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.3/32] ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-n6glm" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0" May 13 23:43:51.360659 containerd[1940]: 2025-05-13 23:43:51.317 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic02afa6d08f ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-n6glm" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0" May 13 23:43:51.360659 containerd[1940]: 2025-05-13 23:43:51.322 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-n6glm" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0" May 13 23:43:51.360659 containerd[1940]: 2025-05-13 23:43:51.327 [INFO][4720] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-n6glm" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0", GenerateName:"calico-apiserver-7c75d84548-", Namespace:"calico-apiserver", SelfLink:"", UID:"cda0cd73-2f14-479a-a37f-508f0d07bf98", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c75d84548", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a", Pod:"calico-apiserver-7c75d84548-n6glm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic02afa6d08f", MAC:"ce:8b:da:c1:75:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:51.360659 containerd[1940]: 2025-05-13 23:43:51.350 [INFO][4720] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" Namespace="calico-apiserver" Pod="calico-apiserver-7c75d84548-n6glm" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--apiserver--7c75d84548--n6glm-eth0" May 13 23:43:51.438730 containerd[1940]: time="2025-05-13T23:43:51.438497435Z" level=info msg="connecting to shim 12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a" 
address="unix:///run/containerd/s/90b0345cbeb37d2cfbcdbec74955a10455de988f7c82acdac7d25b46f6673543" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:51.480532 systemd-networkd[1860]: calie19e43cac31: Link UP May 13 23:43:51.484607 systemd-networkd[1860]: calie19e43cac31: Gained carrier May 13 23:43:51.519601 systemd[1]: Started cri-containerd-12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a.scope - libcontainer container 12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a. May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.173 [INFO][4727] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0 coredns-6f6b679f8f- kube-system 26776fd4-9f96-41d9-b20a-4c6815a3c6a5 711 0 2025-05-13 23:43:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-246 coredns-6f6b679f8f-r74k7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie19e43cac31 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Namespace="kube-system" Pod="coredns-6f6b679f8f-r74k7" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.173 [INFO][4727] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Namespace="kube-system" Pod="coredns-6f6b679f8f-r74k7" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.251 [INFO][4749] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" HandleID="k8s-pod-network.e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Workload="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.271 [INFO][4749] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" HandleID="k8s-pod-network.e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Workload="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed920), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-246", "pod":"coredns-6f6b679f8f-r74k7", "timestamp":"2025-05-13 23:43:51.251813218 +0000 UTC"}, Hostname:"ip-172-31-17-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.271 [INFO][4749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.313 [INFO][4749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.313 [INFO][4749] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-246' May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.371 [INFO][4749] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" host="ip-172-31-17-246" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.383 [INFO][4749] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-246" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.404 [INFO][4749] ipam/ipam.go 489: Trying affinity for 192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.412 [INFO][4749] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.418 [INFO][4749] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.419 [INFO][4749] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" host="ip-172-31-17-246" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.424 [INFO][4749] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.435 [INFO][4749] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" host="ip-172-31-17-246" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.452 [INFO][4749] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.4/26] block=192.168.37.0/26 
handle="k8s-pod-network.e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" host="ip-172-31-17-246" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.454 [INFO][4749] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.4/26] handle="k8s-pod-network.e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" host="ip-172-31-17-246" May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.454 [INFO][4749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:43:51.532904 containerd[1940]: 2025-05-13 23:43:51.455 [INFO][4749] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.4/26] IPv6=[] ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" HandleID="k8s-pod-network.e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Workload="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0" May 13 23:43:51.535047 containerd[1940]: 2025-05-13 23:43:51.467 [INFO][4727] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Namespace="kube-system" Pod="coredns-6f6b679f8f-r74k7" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"26776fd4-9f96-41d9-b20a-4c6815a3c6a5", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"", Pod:"coredns-6f6b679f8f-r74k7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie19e43cac31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:51.535047 containerd[1940]: 2025-05-13 23:43:51.468 [INFO][4727] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.4/32] ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Namespace="kube-system" Pod="coredns-6f6b679f8f-r74k7" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0" May 13 23:43:51.535047 containerd[1940]: 2025-05-13 23:43:51.469 [INFO][4727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie19e43cac31 ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Namespace="kube-system" Pod="coredns-6f6b679f8f-r74k7" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0" May 13 23:43:51.535047 containerd[1940]: 2025-05-13 23:43:51.495 [INFO][4727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Namespace="kube-system" Pod="coredns-6f6b679f8f-r74k7" 
WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0" May 13 23:43:51.535047 containerd[1940]: 2025-05-13 23:43:51.497 [INFO][4727] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Namespace="kube-system" Pod="coredns-6f6b679f8f-r74k7" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"26776fd4-9f96-41d9-b20a-4c6815a3c6a5", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c", Pod:"coredns-6f6b679f8f-r74k7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie19e43cac31", MAC:"5a:c8:e5:67:d1:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:51.535047 containerd[1940]: 2025-05-13 23:43:51.521 [INFO][4727] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" Namespace="kube-system" Pod="coredns-6f6b679f8f-r74k7" WorkloadEndpoint="ip--172--31--17--246-k8s-coredns--6f6b679f8f--r74k7-eth0" May 13 23:43:51.611243 containerd[1940]: time="2025-05-13T23:43:51.611138280Z" level=info msg="connecting to shim e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c" address="unix:///run/containerd/s/d17d74cae5ea8e27d2173ceeca2bdcaaa164938879377aa3f894ea66c943443d" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:51.682571 systemd[1]: Started cri-containerd-e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c.scope - libcontainer container e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c. 
May 13 23:43:51.686407 containerd[1940]: time="2025-05-13T23:43:51.686300148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c75d84548-n6glm,Uid:cda0cd73-2f14-479a-a37f-508f0d07bf98,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a\"" May 13 23:43:51.768484 containerd[1940]: time="2025-05-13T23:43:51.768296808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r74k7,Uid:26776fd4-9f96-41d9-b20a-4c6815a3c6a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c\"" May 13 23:43:51.779310 containerd[1940]: time="2025-05-13T23:43:51.779206488Z" level=info msg="CreateContainer within sandbox \"e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:43:51.796119 containerd[1940]: time="2025-05-13T23:43:51.796045188Z" level=info msg="Container 420c8eae97a5d41d80fc58e4fd9be6aabc413ed8346682972026d0600fdfb958: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:51.807857 containerd[1940]: time="2025-05-13T23:43:51.807786121Z" level=info msg="CreateContainer within sandbox \"e2e505457c8eaff933626c3211d0db09f8caab81bd95af8265371586566e843c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"420c8eae97a5d41d80fc58e4fd9be6aabc413ed8346682972026d0600fdfb958\"" May 13 23:43:51.809561 containerd[1940]: time="2025-05-13T23:43:51.809266393Z" level=info msg="StartContainer for \"420c8eae97a5d41d80fc58e4fd9be6aabc413ed8346682972026d0600fdfb958\"" May 13 23:43:51.811841 containerd[1940]: time="2025-05-13T23:43:51.811755625Z" level=info msg="connecting to shim 420c8eae97a5d41d80fc58e4fd9be6aabc413ed8346682972026d0600fdfb958" address="unix:///run/containerd/s/d17d74cae5ea8e27d2173ceeca2bdcaaa164938879377aa3f894ea66c943443d" protocol=ttrpc version=3 May 13 23:43:51.847530 systemd[1]: 
Started cri-containerd-420c8eae97a5d41d80fc58e4fd9be6aabc413ed8346682972026d0600fdfb958.scope - libcontainer container 420c8eae97a5d41d80fc58e4fd9be6aabc413ed8346682972026d0600fdfb958. May 13 23:43:51.910671 containerd[1940]: time="2025-05-13T23:43:51.910266025Z" level=info msg="StartContainer for \"420c8eae97a5d41d80fc58e4fd9be6aabc413ed8346682972026d0600fdfb958\" returns successfully" May 13 23:43:52.010982 containerd[1940]: time="2025-05-13T23:43:52.010723882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m6fkp,Uid:d371daf2-08ec-44bd-92c2-cd5610ab090d,Namespace:calico-system,Attempt:0,}" May 13 23:43:52.254728 systemd-networkd[1860]: calicfe0fa2cc1e: Link UP May 13 23:43:52.255583 systemd-networkd[1860]: calicfe0fa2cc1e: Gained carrier May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.099 [INFO][4906] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0 csi-node-driver- calico-system d371daf2-08ec-44bd-92c2-cd5610ab090d 619 0 2025-05-13 23:43:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-246 csi-node-driver-m6fkp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicfe0fa2cc1e [] []}} ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" Namespace="calico-system" Pod="csi-node-driver-m6fkp" WorkloadEndpoint="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.100 [INFO][4906] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" 
Namespace="calico-system" Pod="csi-node-driver-m6fkp" WorkloadEndpoint="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.150 [INFO][4919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" HandleID="k8s-pod-network.02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" Workload="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.168 [INFO][4919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" HandleID="k8s-pod-network.02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" Workload="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332a10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-246", "pod":"csi-node-driver-m6fkp", "timestamp":"2025-05-13 23:43:52.150756718 +0000 UTC"}, Hostname:"ip-172-31-17-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.168 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.168 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.168 [INFO][4919] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-246' May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.171 [INFO][4919] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" host="ip-172-31-17-246" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.179 [INFO][4919] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-246" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.198 [INFO][4919] ipam/ipam.go 489: Trying affinity for 192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.205 [INFO][4919] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.211 [INFO][4919] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.212 [INFO][4919] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" host="ip-172-31-17-246" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.216 [INFO][4919] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82 May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.225 [INFO][4919] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" host="ip-172-31-17-246" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.245 [INFO][4919] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.5/26] block=192.168.37.0/26 
handle="k8s-pod-network.02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" host="ip-172-31-17-246" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.245 [INFO][4919] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.5/26] handle="k8s-pod-network.02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" host="ip-172-31-17-246" May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.245 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:43:52.281344 containerd[1940]: 2025-05-13 23:43:52.245 [INFO][4919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.5/26] IPv6=[] ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" HandleID="k8s-pod-network.02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" Workload="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0" May 13 23:43:52.282803 containerd[1940]: 2025-05-13 23:43:52.248 [INFO][4906] cni-plugin/k8s.go 386: Populated endpoint ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" Namespace="calico-system" Pod="csi-node-driver-m6fkp" WorkloadEndpoint="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d371daf2-08ec-44bd-92c2-cd5610ab090d", ResourceVersion:"619", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"", Pod:"csi-node-driver-m6fkp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicfe0fa2cc1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:52.282803 containerd[1940]: 2025-05-13 23:43:52.248 [INFO][4906] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.5/32] ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" Namespace="calico-system" Pod="csi-node-driver-m6fkp" WorkloadEndpoint="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0" May 13 23:43:52.282803 containerd[1940]: 2025-05-13 23:43:52.248 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfe0fa2cc1e ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" Namespace="calico-system" Pod="csi-node-driver-m6fkp" WorkloadEndpoint="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0" May 13 23:43:52.282803 containerd[1940]: 2025-05-13 23:43:52.254 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" Namespace="calico-system" Pod="csi-node-driver-m6fkp" WorkloadEndpoint="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0" May 13 23:43:52.282803 containerd[1940]: 2025-05-13 23:43:52.256 [INFO][4906] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" Namespace="calico-system" Pod="csi-node-driver-m6fkp" WorkloadEndpoint="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d371daf2-08ec-44bd-92c2-cd5610ab090d", ResourceVersion:"619", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82", Pod:"csi-node-driver-m6fkp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicfe0fa2cc1e", MAC:"82:88:27:64:62:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:52.282803 containerd[1940]: 2025-05-13 23:43:52.276 [INFO][4906] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" Namespace="calico-system" 
Pod="csi-node-driver-m6fkp" WorkloadEndpoint="ip--172--31--17--246-k8s-csi--node--driver--m6fkp-eth0" May 13 23:43:52.366912 containerd[1940]: time="2025-05-13T23:43:52.366636659Z" level=info msg="connecting to shim 02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82" address="unix:///run/containerd/s/b70ba6afc0789d5f648bbfdda9b3ff54d6a8bbda750e9d2507e575b18fef8c70" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:52.505626 systemd[1]: Started cri-containerd-02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82.scope - libcontainer container 02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82. May 13 23:43:52.520267 kubelet[3229]: I0513 23:43:52.519866 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-r74k7" podStartSLOduration=36.519841992 podStartE2EDuration="36.519841992s" podCreationTimestamp="2025-05-13 23:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:43:52.462491976 +0000 UTC m=+40.631094899" watchObservedRunningTime="2025-05-13 23:43:52.519841992 +0000 UTC m=+40.688444891" May 13 23:43:52.656538 containerd[1940]: time="2025-05-13T23:43:52.656403637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m6fkp,Uid:d371daf2-08ec-44bd-92c2-cd5610ab090d,Namespace:calico-system,Attempt:0,} returns sandbox id \"02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82\"" May 13 23:43:52.916593 systemd-networkd[1860]: calie19e43cac31: Gained IPv6LL May 13 23:43:53.010288 containerd[1940]: time="2025-05-13T23:43:53.010115063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845cb5b5b9-57b2c,Uid:cdca3ef4-4276-4570-a8a4-faa7c5ce116d,Namespace:calico-system,Attempt:0,}" May 13 23:43:53.045330 systemd-networkd[1860]: calic02afa6d08f: Gained IPv6LL May 13 23:43:53.232214 systemd-networkd[1860]: 
cali358da77d101: Link UP May 13 23:43:53.233408 systemd-networkd[1860]: cali358da77d101: Gained carrier May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.089 [INFO][4993] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0 calico-kube-controllers-845cb5b5b9- calico-system cdca3ef4-4276-4570-a8a4-faa7c5ce116d 708 0 2025-05-13 23:43:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:845cb5b5b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-246 calico-kube-controllers-845cb5b5b9-57b2c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali358da77d101 [] []}} ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Namespace="calico-system" Pod="calico-kube-controllers-845cb5b5b9-57b2c" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.089 [INFO][4993] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Namespace="calico-system" Pod="calico-kube-controllers-845cb5b5b9-57b2c" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.143 [INFO][5004] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" HandleID="k8s-pod-network.656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Workload="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0" May 13 23:43:53.260806 
containerd[1940]: 2025-05-13 23:43:53.163 [INFO][5004] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" HandleID="k8s-pod-network.656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Workload="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000223110), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-246", "pod":"calico-kube-controllers-845cb5b5b9-57b2c", "timestamp":"2025-05-13 23:43:53.143463299 +0000 UTC"}, Hostname:"ip-172-31-17-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.164 [INFO][5004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.164 [INFO][5004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.164 [INFO][5004] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-246' May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.168 [INFO][5004] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" host="ip-172-31-17-246" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.174 [INFO][5004] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-246" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.186 [INFO][5004] ipam/ipam.go 489: Trying affinity for 192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.190 [INFO][5004] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.196 [INFO][5004] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ip-172-31-17-246" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.197 [INFO][5004] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" host="ip-172-31-17-246" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.202 [INFO][5004] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731 May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.212 [INFO][5004] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" host="ip-172-31-17-246" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.223 [INFO][5004] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.6/26] block=192.168.37.0/26 
handle="k8s-pod-network.656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" host="ip-172-31-17-246" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.224 [INFO][5004] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.6/26] handle="k8s-pod-network.656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" host="ip-172-31-17-246" May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.224 [INFO][5004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:43:53.260806 containerd[1940]: 2025-05-13 23:43:53.224 [INFO][5004] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.6/26] IPv6=[] ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" HandleID="k8s-pod-network.656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Workload="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0" May 13 23:43:53.264090 containerd[1940]: 2025-05-13 23:43:53.227 [INFO][4993] cni-plugin/k8s.go 386: Populated endpoint ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Namespace="calico-system" Pod="calico-kube-controllers-845cb5b5b9-57b2c" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0", GenerateName:"calico-kube-controllers-845cb5b5b9-", Namespace:"calico-system", SelfLink:"", UID:"cdca3ef4-4276-4570-a8a4-faa7c5ce116d", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"845cb5b5b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"", Pod:"calico-kube-controllers-845cb5b5b9-57b2c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali358da77d101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:43:53.264090 containerd[1940]: 2025-05-13 23:43:53.228 [INFO][4993] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.6/32] ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Namespace="calico-system" Pod="calico-kube-controllers-845cb5b5b9-57b2c" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0" May 13 23:43:53.264090 containerd[1940]: 2025-05-13 23:43:53.228 [INFO][4993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali358da77d101 ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Namespace="calico-system" Pod="calico-kube-controllers-845cb5b5b9-57b2c" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0" May 13 23:43:53.264090 containerd[1940]: 2025-05-13 23:43:53.232 [INFO][4993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Namespace="calico-system" Pod="calico-kube-controllers-845cb5b5b9-57b2c" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0" 
May 13 23:43:53.264090 containerd[1940]: 2025-05-13 23:43:53.233 [INFO][4993] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Namespace="calico-system" Pod="calico-kube-controllers-845cb5b5b9-57b2c" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0", GenerateName:"calico-kube-controllers-845cb5b5b9-", Namespace:"calico-system", SelfLink:"", UID:"cdca3ef4-4276-4570-a8a4-faa7c5ce116d", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 43, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"845cb5b5b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-246", ContainerID:"656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731", Pod:"calico-kube-controllers-845cb5b5b9-57b2c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali358da77d101", MAC:"ea:69:5b:ac:34:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 
23:43:53.264090 containerd[1940]: 2025-05-13 23:43:53.255 [INFO][4993] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" Namespace="calico-system" Pod="calico-kube-controllers-845cb5b5b9-57b2c" WorkloadEndpoint="ip--172--31--17--246-k8s-calico--kube--controllers--845cb5b5b9--57b2c-eth0" May 13 23:43:53.313667 containerd[1940]: time="2025-05-13T23:43:53.313402464Z" level=info msg="connecting to shim 656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731" address="unix:///run/containerd/s/970ddb0c3bc5306fbd52ddab1dc22adbd514f583d1a0da6c984cc4e14ca55167" namespace=k8s.io protocol=ttrpc version=3 May 13 23:43:53.368577 systemd[1]: Started cri-containerd-656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731.scope - libcontainer container 656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731. May 13 23:43:53.465614 containerd[1940]: time="2025-05-13T23:43:53.465528877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845cb5b5b9-57b2c,Uid:cdca3ef4-4276-4570-a8a4-faa7c5ce116d,Namespace:calico-system,Attempt:0,} returns sandbox id \"656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731\"" May 13 23:43:53.748622 systemd-networkd[1860]: calicfe0fa2cc1e: Gained IPv6LL May 13 23:43:55.156589 systemd-networkd[1860]: cali358da77d101: Gained IPv6LL May 13 23:43:55.549643 containerd[1940]: time="2025-05-13T23:43:55.549476739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:55.552581 containerd[1940]: time="2025-05-13T23:43:55.552477435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 23:43:55.555480 containerd[1940]: time="2025-05-13T23:43:55.555396699Z" level=info msg="ImageCreate event 
name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:55.560791 containerd[1940]: time="2025-05-13T23:43:55.560461575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:55.563999 containerd[1940]: time="2025-05-13T23:43:55.562893399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 5.792320613s" May 13 23:43:55.563999 containerd[1940]: time="2025-05-13T23:43:55.562957143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:43:55.578485 containerd[1940]: time="2025-05-13T23:43:55.578358327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:43:55.593353 containerd[1940]: time="2025-05-13T23:43:55.593279331Z" level=info msg="CreateContainer within sandbox \"06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:43:55.616399 containerd[1940]: time="2025-05-13T23:43:55.616314651Z" level=info msg="Container 8028956a870f0329a7fd437d1ac7e38f2efb039d1cd51b3c7795c86fb2bff43a: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:55.641571 containerd[1940]: time="2025-05-13T23:43:55.641092528Z" level=info msg="CreateContainer within sandbox \"06414402a261683b2846f9f52c527669b1e2dadd37be0e3fcf7d8dbd4243d069\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8028956a870f0329a7fd437d1ac7e38f2efb039d1cd51b3c7795c86fb2bff43a\"" May 13 23:43:55.654031 containerd[1940]: time="2025-05-13T23:43:55.653658244Z" level=info msg="StartContainer for \"8028956a870f0329a7fd437d1ac7e38f2efb039d1cd51b3c7795c86fb2bff43a\"" May 13 23:43:55.661671 containerd[1940]: time="2025-05-13T23:43:55.661593664Z" level=info msg="connecting to shim 8028956a870f0329a7fd437d1ac7e38f2efb039d1cd51b3c7795c86fb2bff43a" address="unix:///run/containerd/s/cc4e68cf91db0ace9357205b5806cf1113748d9b656cd7cbec44e9963b27de4c" protocol=ttrpc version=3 May 13 23:43:55.711416 systemd[1]: Started cri-containerd-8028956a870f0329a7fd437d1ac7e38f2efb039d1cd51b3c7795c86fb2bff43a.scope - libcontainer container 8028956a870f0329a7fd437d1ac7e38f2efb039d1cd51b3c7795c86fb2bff43a. May 13 23:43:55.794073 containerd[1940]: time="2025-05-13T23:43:55.793999900Z" level=info msg="StartContainer for \"8028956a870f0329a7fd437d1ac7e38f2efb039d1cd51b3c7795c86fb2bff43a\" returns successfully" May 13 23:43:55.915885 containerd[1940]: time="2025-05-13T23:43:55.915815693Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:55.919127 containerd[1940]: time="2025-05-13T23:43:55.919047845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 23:43:55.922514 containerd[1940]: time="2025-05-13T23:43:55.922426673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 343.88543ms" May 13 23:43:55.922514 containerd[1940]: 
time="2025-05-13T23:43:55.922508525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:43:55.926638 containerd[1940]: time="2025-05-13T23:43:55.926496533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 23:43:55.938254 containerd[1940]: time="2025-05-13T23:43:55.937320425Z" level=info msg="CreateContainer within sandbox \"12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:43:55.959213 containerd[1940]: time="2025-05-13T23:43:55.958325957Z" level=info msg="Container ffbea00c95f24fbf504c5db8fb399b8e88c7140334b39774b586cce8af9762c9: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:55.982417 containerd[1940]: time="2025-05-13T23:43:55.982348709Z" level=info msg="CreateContainer within sandbox \"12f8aa63b8e377fd29614990d1cffbf7d98ffd82675ca85ad9ce921f7f44707a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ffbea00c95f24fbf504c5db8fb399b8e88c7140334b39774b586cce8af9762c9\"" May 13 23:43:55.983278 containerd[1940]: time="2025-05-13T23:43:55.983206781Z" level=info msg="StartContainer for \"ffbea00c95f24fbf504c5db8fb399b8e88c7140334b39774b586cce8af9762c9\"" May 13 23:43:55.990374 containerd[1940]: time="2025-05-13T23:43:55.990194309Z" level=info msg="connecting to shim ffbea00c95f24fbf504c5db8fb399b8e88c7140334b39774b586cce8af9762c9" address="unix:///run/containerd/s/90b0345cbeb37d2cfbcdbec74955a10455de988f7c82acdac7d25b46f6673543" protocol=ttrpc version=3 May 13 23:43:56.031543 systemd[1]: Started cri-containerd-ffbea00c95f24fbf504c5db8fb399b8e88c7140334b39774b586cce8af9762c9.scope - libcontainer container ffbea00c95f24fbf504c5db8fb399b8e88c7140334b39774b586cce8af9762c9. 
May 13 23:43:56.182735 containerd[1940]: time="2025-05-13T23:43:56.182434718Z" level=info msg="StartContainer for \"ffbea00c95f24fbf504c5db8fb399b8e88c7140334b39774b586cce8af9762c9\" returns successfully" May 13 23:43:56.521590 kubelet[3229]: I0513 23:43:56.521379 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c75d84548-wpkv4" podStartSLOduration=26.714343367 podStartE2EDuration="32.521351668s" podCreationTimestamp="2025-05-13 23:43:24 +0000 UTC" firstStartedPulling="2025-05-13 23:43:49.769499182 +0000 UTC m=+37.938102081" lastFinishedPulling="2025-05-13 23:43:55.576507483 +0000 UTC m=+43.745110382" observedRunningTime="2025-05-13 23:43:56.496818424 +0000 UTC m=+44.665421371" watchObservedRunningTime="2025-05-13 23:43:56.521351668 +0000 UTC m=+44.689954567" May 13 23:43:57.480257 kubelet[3229]: I0513 23:43:57.476198 3229 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:43:57.534719 ntpd[1924]: Listen normally on 8 vxlan.calico 192.168.37.0:123 May 13 23:43:57.535572 ntpd[1924]: Listen normally on 9 vxlan.calico [fe80::64ec:e3ff:fe7b:e284%4]:123 May 13 23:43:57.537829 ntpd[1924]: 13 May 23:43:57 ntpd[1924]: Listen normally on 8 vxlan.calico 192.168.37.0:123 May 13 23:43:57.537829 ntpd[1924]: 13 May 23:43:57 ntpd[1924]: Listen normally on 9 vxlan.calico [fe80::64ec:e3ff:fe7b:e284%4]:123 May 13 23:43:57.537829 ntpd[1924]: 13 May 23:43:57 ntpd[1924]: Listen normally on 10 calic64e2bd0de6 [fe80::ecee:eeff:feee:eeee%7]:123 May 13 23:43:57.537829 ntpd[1924]: 13 May 23:43:57 ntpd[1924]: Listen normally on 11 cali359e52287b4 [fe80::ecee:eeff:feee:eeee%8]:123 May 13 23:43:57.537829 ntpd[1924]: 13 May 23:43:57 ntpd[1924]: Listen normally on 12 calic02afa6d08f [fe80::ecee:eeff:feee:eeee%9]:123 May 13 23:43:57.537829 ntpd[1924]: 13 May 23:43:57 ntpd[1924]: Listen normally on 13 calie19e43cac31 [fe80::ecee:eeff:feee:eeee%10]:123 May 13 23:43:57.537829 ntpd[1924]: 13 May 23:43:57 
ntpd[1924]: Listen normally on 14 calicfe0fa2cc1e [fe80::ecee:eeff:feee:eeee%11]:123 May 13 23:43:57.537829 ntpd[1924]: 13 May 23:43:57 ntpd[1924]: Listen normally on 15 cali358da77d101 [fe80::ecee:eeff:feee:eeee%12]:123 May 13 23:43:57.535659 ntpd[1924]: Listen normally on 10 calic64e2bd0de6 [fe80::ecee:eeff:feee:eeee%7]:123 May 13 23:43:57.535734 ntpd[1924]: Listen normally on 11 cali359e52287b4 [fe80::ecee:eeff:feee:eeee%8]:123 May 13 23:43:57.535801 ntpd[1924]: Listen normally on 12 calic02afa6d08f [fe80::ecee:eeff:feee:eeee%9]:123 May 13 23:43:57.535867 ntpd[1924]: Listen normally on 13 calie19e43cac31 [fe80::ecee:eeff:feee:eeee%10]:123 May 13 23:43:57.535956 ntpd[1924]: Listen normally on 14 calicfe0fa2cc1e [fe80::ecee:eeff:feee:eeee%11]:123 May 13 23:43:57.536028 ntpd[1924]: Listen normally on 15 cali358da77d101 [fe80::ecee:eeff:feee:eeee%12]:123 May 13 23:43:57.757483 containerd[1940]: time="2025-05-13T23:43:57.757318974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:57.763764 containerd[1940]: time="2025-05-13T23:43:57.763666098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 23:43:57.768545 containerd[1940]: time="2025-05-13T23:43:57.768205854Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:57.785154 containerd[1940]: time="2025-05-13T23:43:57.783468258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:43:57.785933 containerd[1940]: time="2025-05-13T23:43:57.785829498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id 
\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.859264325s" May 13 23:43:57.786062 containerd[1940]: time="2025-05-13T23:43:57.785930106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 23:43:57.793125 containerd[1940]: time="2025-05-13T23:43:57.792941562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 23:43:57.799125 containerd[1940]: time="2025-05-13T23:43:57.799054386Z" level=info msg="CreateContainer within sandbox \"02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 23:43:57.855607 containerd[1940]: time="2025-05-13T23:43:57.855532879Z" level=info msg="Container bb5bffb55eab503723b6a370606907048da139b15ba9de626ca5a975d0533516: CDI devices from CRI Config.CDIDevices: []" May 13 23:43:57.879013 containerd[1940]: time="2025-05-13T23:43:57.875979019Z" level=info msg="CreateContainer within sandbox \"02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bb5bffb55eab503723b6a370606907048da139b15ba9de626ca5a975d0533516\"" May 13 23:43:57.882254 containerd[1940]: time="2025-05-13T23:43:57.882038503Z" level=info msg="StartContainer for \"bb5bffb55eab503723b6a370606907048da139b15ba9de626ca5a975d0533516\"" May 13 23:43:57.885655 containerd[1940]: time="2025-05-13T23:43:57.885365995Z" level=info msg="connecting to shim bb5bffb55eab503723b6a370606907048da139b15ba9de626ca5a975d0533516" address="unix:///run/containerd/s/b70ba6afc0789d5f648bbfdda9b3ff54d6a8bbda750e9d2507e575b18fef8c70" protocol=ttrpc version=3 May 13 23:43:57.955047 
systemd[1]: Started cri-containerd-bb5bffb55eab503723b6a370606907048da139b15ba9de626ca5a975d0533516.scope - libcontainer container bb5bffb55eab503723b6a370606907048da139b15ba9de626ca5a975d0533516. May 13 23:43:58.107726 containerd[1940]: time="2025-05-13T23:43:58.107012824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae\" id:\"9cecd1544316414ea9bdb0984a85a5d9f9cedf40adc3b2c45d01ecd34e256979\" pid:5164 exited_at:{seconds:1747179838 nanos:105645388}" May 13 23:43:58.146688 kubelet[3229]: I0513 23:43:58.146576 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c75d84548-n6glm" podStartSLOduration=29.909693207 podStartE2EDuration="34.14655232s" podCreationTimestamp="2025-05-13 23:43:24 +0000 UTC" firstStartedPulling="2025-05-13 23:43:51.689990688 +0000 UTC m=+39.858593599" lastFinishedPulling="2025-05-13 23:43:55.926849729 +0000 UTC m=+44.095452712" observedRunningTime="2025-05-13 23:43:56.52309288 +0000 UTC m=+44.691695803" watchObservedRunningTime="2025-05-13 23:43:58.14655232 +0000 UTC m=+46.315155231" May 13 23:43:58.294148 containerd[1940]: time="2025-05-13T23:43:58.294080477Z" level=info msg="StartContainer for \"bb5bffb55eab503723b6a370606907048da139b15ba9de626ca5a975d0533516\" returns successfully" May 13 23:43:58.483362 kubelet[3229]: I0513 23:43:58.482419 3229 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:44:01.019260 containerd[1940]: time="2025-05-13T23:44:01.018931506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:01.021345 containerd[1940]: time="2025-05-13T23:44:01.021213654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 23:44:01.024323 containerd[1940]: 
time="2025-05-13T23:44:01.022735050Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:01.027808 containerd[1940]: time="2025-05-13T23:44:01.027751374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 3.234726352s" May 13 23:44:01.028026 containerd[1940]: time="2025-05-13T23:44:01.027997314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 23:44:01.029307 containerd[1940]: time="2025-05-13T23:44:01.028562442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:01.031171 containerd[1940]: time="2025-05-13T23:44:01.031124886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 23:44:01.066477 containerd[1940]: time="2025-05-13T23:44:01.066395623Z" level=info msg="CreateContainer within sandbox \"656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 23:44:01.079264 containerd[1940]: time="2025-05-13T23:44:01.077542351Z" level=info msg="Container 0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:01.098485 containerd[1940]: time="2025-05-13T23:44:01.098415619Z" level=info 
msg="CreateContainer within sandbox \"656dc8c81c1de5f35eec99cb107f89d4061b42fd75124f07327676b4c83ef731\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4\"" May 13 23:44:01.100517 containerd[1940]: time="2025-05-13T23:44:01.100387159Z" level=info msg="StartContainer for \"0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4\"" May 13 23:44:01.103546 containerd[1940]: time="2025-05-13T23:44:01.103472311Z" level=info msg="connecting to shim 0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4" address="unix:///run/containerd/s/970ddb0c3bc5306fbd52ddab1dc22adbd514f583d1a0da6c984cc4e14ca55167" protocol=ttrpc version=3 May 13 23:44:01.141563 systemd[1]: Started cri-containerd-0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4.scope - libcontainer container 0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4. May 13 23:44:01.228204 containerd[1940]: time="2025-05-13T23:44:01.227686423Z" level=info msg="StartContainer for \"0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4\" returns successfully" May 13 23:44:01.611956 containerd[1940]: time="2025-05-13T23:44:01.611880561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4\" id:\"e61cd6aef0c2b3b78b57b4c0de752ed8e056905e8c3a4b8df8188ad724f22534\" pid:5266 exited_at:{seconds:1747179841 nanos:610155825}" May 13 23:44:01.643386 kubelet[3229]: I0513 23:44:01.642998 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-845cb5b5b9-57b2c" podStartSLOduration=28.08145996 podStartE2EDuration="35.642788877s" podCreationTimestamp="2025-05-13 23:43:26 +0000 UTC" firstStartedPulling="2025-05-13 23:43:53.468032881 +0000 UTC m=+41.636635792" lastFinishedPulling="2025-05-13 23:44:01.029361786 +0000 UTC m=+49.197964709" 
observedRunningTime="2025-05-13 23:44:01.535666161 +0000 UTC m=+49.704269132" watchObservedRunningTime="2025-05-13 23:44:01.642788877 +0000 UTC m=+49.811391980" May 13 23:44:02.657841 containerd[1940]: time="2025-05-13T23:44:02.657540934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:02.659731 containerd[1940]: time="2025-05-13T23:44:02.659636026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 23:44:02.659861 containerd[1940]: time="2025-05-13T23:44:02.659807374Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:02.665001 containerd[1940]: time="2025-05-13T23:44:02.664943542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:44:02.666812 containerd[1940]: time="2025-05-13T23:44:02.665873698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.633353116s" May 13 23:44:02.666812 containerd[1940]: time="2025-05-13T23:44:02.665931574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 23:44:02.673548 containerd[1940]: time="2025-05-13T23:44:02.673472483Z" 
level=info msg="CreateContainer within sandbox \"02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 23:44:02.708043 containerd[1940]: time="2025-05-13T23:44:02.707954831Z" level=info msg="Container ae851096ffad938d967a0b348dfe1b85c0c554b8040506b958a3d8e9fcf93483: CDI devices from CRI Config.CDIDevices: []" May 13 23:44:02.765971 containerd[1940]: time="2025-05-13T23:44:02.765897635Z" level=info msg="CreateContainer within sandbox \"02096f4ebc9b4137545a0a84746ea9983801f32d5cfabdcd40c3742e6c2b0a82\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ae851096ffad938d967a0b348dfe1b85c0c554b8040506b958a3d8e9fcf93483\"" May 13 23:44:02.770342 containerd[1940]: time="2025-05-13T23:44:02.770204003Z" level=info msg="StartContainer for \"ae851096ffad938d967a0b348dfe1b85c0c554b8040506b958a3d8e9fcf93483\"" May 13 23:44:02.783271 containerd[1940]: time="2025-05-13T23:44:02.783140987Z" level=info msg="connecting to shim ae851096ffad938d967a0b348dfe1b85c0c554b8040506b958a3d8e9fcf93483" address="unix:///run/containerd/s/b70ba6afc0789d5f648bbfdda9b3ff54d6a8bbda750e9d2507e575b18fef8c70" protocol=ttrpc version=3 May 13 23:44:02.864876 systemd[1]: Started cri-containerd-ae851096ffad938d967a0b348dfe1b85c0c554b8040506b958a3d8e9fcf93483.scope - libcontainer container ae851096ffad938d967a0b348dfe1b85c0c554b8040506b958a3d8e9fcf93483. 
May 13 23:44:03.046822 containerd[1940]: time="2025-05-13T23:44:03.044020988Z" level=info msg="StartContainer for \"ae851096ffad938d967a0b348dfe1b85c0c554b8040506b958a3d8e9fcf93483\" returns successfully" May 13 23:44:03.288746 kubelet[3229]: I0513 23:44:03.288688 3229 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 23:44:03.288746 kubelet[3229]: I0513 23:44:03.288744 3229 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 23:44:03.538051 kubelet[3229]: I0513 23:44:03.537872 3229 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-m6fkp" podStartSLOduration=27.527470981 podStartE2EDuration="37.537846191s" podCreationTimestamp="2025-05-13 23:43:26 +0000 UTC" firstStartedPulling="2025-05-13 23:43:52.659405269 +0000 UTC m=+40.828008168" lastFinishedPulling="2025-05-13 23:44:02.669780467 +0000 UTC m=+50.838383378" observedRunningTime="2025-05-13 23:44:03.536537339 +0000 UTC m=+51.705140262" watchObservedRunningTime="2025-05-13 23:44:03.537846191 +0000 UTC m=+51.706449102" May 13 23:44:03.705692 systemd[1]: Started sshd@7-172.31.17.246:22-139.178.89.65:51310.service - OpenSSH per-connection server daemon (139.178.89.65:51310). May 13 23:44:03.932789 sshd[5321]: Accepted publickey for core from 139.178.89.65 port 51310 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:03.936284 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:03.946492 systemd-logind[1930]: New session 8 of user core. May 13 23:44:03.955850 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 13 23:44:04.263941 sshd[5323]: Connection closed by 139.178.89.65 port 51310 May 13 23:44:04.264972 sshd-session[5321]: pam_unix(sshd:session): session closed for user core May 13 23:44:04.271628 systemd[1]: sshd@7-172.31.17.246:22-139.178.89.65:51310.service: Deactivated successfully. May 13 23:44:04.276182 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:44:04.279059 systemd-logind[1930]: Session 8 logged out. Waiting for processes to exit. May 13 23:44:04.280757 systemd-logind[1930]: Removed session 8. May 13 23:44:09.310589 systemd[1]: Started sshd@8-172.31.17.246:22-139.178.89.65:49382.service - OpenSSH per-connection server daemon (139.178.89.65:49382). May 13 23:44:09.529726 sshd[5342]: Accepted publickey for core from 139.178.89.65 port 49382 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:09.533029 sshd-session[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:09.542933 systemd-logind[1930]: New session 9 of user core. May 13 23:44:09.550859 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:44:09.876318 sshd[5344]: Connection closed by 139.178.89.65 port 49382 May 13 23:44:09.877455 sshd-session[5342]: pam_unix(sshd:session): session closed for user core May 13 23:44:09.887640 systemd[1]: sshd@8-172.31.17.246:22-139.178.89.65:49382.service: Deactivated successfully. May 13 23:44:09.893778 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:44:09.896397 systemd-logind[1930]: Session 9 logged out. Waiting for processes to exit. May 13 23:44:09.898368 systemd-logind[1930]: Removed session 9. May 13 23:44:14.915034 systemd[1]: Started sshd@9-172.31.17.246:22-139.178.89.65:49388.service - OpenSSH per-connection server daemon (139.178.89.65:49388). 
May 13 23:44:15.118510 sshd[5363]: Accepted publickey for core from 139.178.89.65 port 49388 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:15.120998 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:15.130696 systemd-logind[1930]: New session 10 of user core. May 13 23:44:15.139495 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:44:15.414625 sshd[5365]: Connection closed by 139.178.89.65 port 49388 May 13 23:44:15.415597 sshd-session[5363]: pam_unix(sshd:session): session closed for user core May 13 23:44:15.422844 systemd[1]: sshd@9-172.31.17.246:22-139.178.89.65:49388.service: Deactivated successfully. May 13 23:44:15.427318 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:44:15.430686 systemd-logind[1930]: Session 10 logged out. Waiting for processes to exit. May 13 23:44:15.434072 systemd-logind[1930]: Removed session 10. May 13 23:44:15.455481 systemd[1]: Started sshd@10-172.31.17.246:22-139.178.89.65:49390.service - OpenSSH per-connection server daemon (139.178.89.65:49390). May 13 23:44:15.652744 sshd[5378]: Accepted publickey for core from 139.178.89.65 port 49390 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:15.655376 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:15.663921 systemd-logind[1930]: New session 11 of user core. May 13 23:44:15.670612 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:44:16.027342 sshd[5380]: Connection closed by 139.178.89.65 port 49390 May 13 23:44:16.028704 sshd-session[5378]: pam_unix(sshd:session): session closed for user core May 13 23:44:16.043669 systemd[1]: sshd@10-172.31.17.246:22-139.178.89.65:49390.service: Deactivated successfully. May 13 23:44:16.053313 systemd[1]: session-11.scope: Deactivated successfully. 
May 13 23:44:16.056495 systemd-logind[1930]: Session 11 logged out. Waiting for processes to exit. May 13 23:44:16.080337 systemd[1]: Started sshd@11-172.31.17.246:22-139.178.89.65:49404.service - OpenSSH per-connection server daemon (139.178.89.65:49404). May 13 23:44:16.083076 systemd-logind[1930]: Removed session 11. May 13 23:44:16.293515 sshd[5389]: Accepted publickey for core from 139.178.89.65 port 49404 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:16.296433 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:16.306587 systemd-logind[1930]: New session 12 of user core. May 13 23:44:16.314817 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 23:44:16.652315 sshd[5392]: Connection closed by 139.178.89.65 port 49404 May 13 23:44:16.653170 sshd-session[5389]: pam_unix(sshd:session): session closed for user core May 13 23:44:16.662380 systemd[1]: sshd@11-172.31.17.246:22-139.178.89.65:49404.service: Deactivated successfully. May 13 23:44:16.668746 systemd[1]: session-12.scope: Deactivated successfully. May 13 23:44:16.674847 systemd-logind[1930]: Session 12 logged out. Waiting for processes to exit. May 13 23:44:16.678776 systemd-logind[1930]: Removed session 12. May 13 23:44:21.693515 systemd[1]: Started sshd@12-172.31.17.246:22-139.178.89.65:45622.service - OpenSSH per-connection server daemon (139.178.89.65:45622). May 13 23:44:21.892882 sshd[5410]: Accepted publickey for core from 139.178.89.65 port 45622 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:21.895653 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:21.903857 systemd-logind[1930]: New session 13 of user core. May 13 23:44:21.911484 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 13 23:44:22.171252 sshd[5412]: Connection closed by 139.178.89.65 port 45622 May 13 23:44:22.172299 sshd-session[5410]: pam_unix(sshd:session): session closed for user core May 13 23:44:22.178957 systemd[1]: sshd@12-172.31.17.246:22-139.178.89.65:45622.service: Deactivated successfully. May 13 23:44:22.184455 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:44:22.186283 systemd-logind[1930]: Session 13 logged out. Waiting for processes to exit. May 13 23:44:22.188324 systemd-logind[1930]: Removed session 13. May 13 23:44:22.357178 containerd[1940]: time="2025-05-13T23:44:22.357102892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4\" id:\"b1d9c0cfa03bba6ae308fdf92a4258764811cc389485868a94883c77f8575bcd\" pid:5434 exited_at:{seconds:1747179862 nanos:356068216}" May 13 23:44:25.335816 containerd[1940]: time="2025-05-13T23:44:25.335732911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4\" id:\"c046db9b1dd2f59125b152ff07b0de5a12ce3b750ccb41973d0cfa5846a57825\" pid:5457 exited_at:{seconds:1747179865 nanos:335026459}" May 13 23:44:26.286877 kubelet[3229]: I0513 23:44:26.286506 3229 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:44:27.211482 systemd[1]: Started sshd@13-172.31.17.246:22-139.178.89.65:42656.service - OpenSSH per-connection server daemon (139.178.89.65:42656). May 13 23:44:27.412440 sshd[5469]: Accepted publickey for core from 139.178.89.65 port 42656 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:27.416415 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:27.434656 systemd-logind[1930]: New session 14 of user core. May 13 23:44:27.440090 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 13 23:44:27.801289 containerd[1940]: time="2025-05-13T23:44:27.801174275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae\" id:\"f5e2169ecaa99b1e8dbb84d7561a43aab405c39e5e954462d039b901a896181f\" pid:5486 exited_at:{seconds:1747179867 nanos:800734775}" May 13 23:44:27.807320 sshd[5471]: Connection closed by 139.178.89.65 port 42656 May 13 23:44:27.807829 sshd-session[5469]: pam_unix(sshd:session): session closed for user core May 13 23:44:27.819932 systemd[1]: sshd@13-172.31.17.246:22-139.178.89.65:42656.service: Deactivated successfully. May 13 23:44:27.827681 systemd[1]: session-14.scope: Deactivated successfully. May 13 23:44:27.832469 systemd-logind[1930]: Session 14 logged out. Waiting for processes to exit. May 13 23:44:27.835903 systemd-logind[1930]: Removed session 14. May 13 23:44:32.846526 systemd[1]: Started sshd@14-172.31.17.246:22-139.178.89.65:42668.service - OpenSSH per-connection server daemon (139.178.89.65:42668). May 13 23:44:33.052431 sshd[5513]: Accepted publickey for core from 139.178.89.65 port 42668 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:33.054947 sshd-session[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:33.068994 systemd-logind[1930]: New session 15 of user core. May 13 23:44:33.077046 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 23:44:33.393187 sshd[5515]: Connection closed by 139.178.89.65 port 42668 May 13 23:44:33.394014 sshd-session[5513]: pam_unix(sshd:session): session closed for user core May 13 23:44:33.403587 systemd[1]: sshd@14-172.31.17.246:22-139.178.89.65:42668.service: Deactivated successfully. May 13 23:44:33.409151 systemd[1]: session-15.scope: Deactivated successfully. May 13 23:44:33.412470 systemd-logind[1930]: Session 15 logged out. Waiting for processes to exit. 
May 13 23:44:33.416050 systemd-logind[1930]: Removed session 15. May 13 23:44:38.432001 systemd[1]: Started sshd@15-172.31.17.246:22-139.178.89.65:40936.service - OpenSSH per-connection server daemon (139.178.89.65:40936). May 13 23:44:38.643421 sshd[5530]: Accepted publickey for core from 139.178.89.65 port 40936 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:38.649161 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:38.662366 systemd-logind[1930]: New session 16 of user core. May 13 23:44:38.670535 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 23:44:38.949804 sshd[5532]: Connection closed by 139.178.89.65 port 40936 May 13 23:44:38.951152 sshd-session[5530]: pam_unix(sshd:session): session closed for user core May 13 23:44:38.961180 systemd[1]: sshd@15-172.31.17.246:22-139.178.89.65:40936.service: Deactivated successfully. May 13 23:44:38.967917 systemd[1]: session-16.scope: Deactivated successfully. May 13 23:44:38.970569 systemd-logind[1930]: Session 16 logged out. Waiting for processes to exit. May 13 23:44:38.989707 systemd[1]: Started sshd@16-172.31.17.246:22-139.178.89.65:40950.service - OpenSSH per-connection server daemon (139.178.89.65:40950). May 13 23:44:38.994356 systemd-logind[1930]: Removed session 16. May 13 23:44:39.197508 sshd[5543]: Accepted publickey for core from 139.178.89.65 port 40950 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:39.201218 sshd-session[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:39.210354 systemd-logind[1930]: New session 17 of user core. May 13 23:44:39.215551 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 13 23:44:39.766741 sshd[5546]: Connection closed by 139.178.89.65 port 40950 May 13 23:44:39.765845 sshd-session[5543]: pam_unix(sshd:session): session closed for user core May 13 23:44:39.778603 systemd[1]: sshd@16-172.31.17.246:22-139.178.89.65:40950.service: Deactivated successfully. May 13 23:44:39.779260 systemd-logind[1930]: Session 17 logged out. Waiting for processes to exit. May 13 23:44:39.790265 systemd[1]: session-17.scope: Deactivated successfully. May 13 23:44:39.812829 systemd[1]: Started sshd@17-172.31.17.246:22-139.178.89.65:40954.service - OpenSSH per-connection server daemon (139.178.89.65:40954). May 13 23:44:39.816757 systemd-logind[1930]: Removed session 17. May 13 23:44:40.031088 sshd[5554]: Accepted publickey for core from 139.178.89.65 port 40954 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:40.033897 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:40.042677 systemd-logind[1930]: New session 18 of user core. May 13 23:44:40.051519 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 23:44:43.668918 sshd[5557]: Connection closed by 139.178.89.65 port 40954 May 13 23:44:43.671117 sshd-session[5554]: pam_unix(sshd:session): session closed for user core May 13 23:44:43.681221 systemd[1]: sshd@17-172.31.17.246:22-139.178.89.65:40954.service: Deactivated successfully. May 13 23:44:43.689074 systemd[1]: session-18.scope: Deactivated successfully. May 13 23:44:43.690582 systemd[1]: session-18.scope: Consumed 1.021s CPU time, 68.4M memory peak. May 13 23:44:43.694765 systemd-logind[1930]: Session 18 logged out. Waiting for processes to exit. May 13 23:44:43.720478 systemd[1]: Started sshd@18-172.31.17.246:22-139.178.89.65:40958.service - OpenSSH per-connection server daemon (139.178.89.65:40958). May 13 23:44:43.726823 systemd-logind[1930]: Removed session 18. 
May 13 23:44:43.936121 sshd[5580]: Accepted publickey for core from 139.178.89.65 port 40958 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:43.938342 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:43.948157 systemd-logind[1930]: New session 19 of user core. May 13 23:44:43.956496 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 23:44:44.479141 sshd[5584]: Connection closed by 139.178.89.65 port 40958 May 13 23:44:44.479715 sshd-session[5580]: pam_unix(sshd:session): session closed for user core May 13 23:44:44.487509 systemd[1]: sshd@18-172.31.17.246:22-139.178.89.65:40958.service: Deactivated successfully. May 13 23:44:44.492387 systemd[1]: session-19.scope: Deactivated successfully. May 13 23:44:44.495679 systemd-logind[1930]: Session 19 logged out. Waiting for processes to exit. May 13 23:44:44.498048 systemd-logind[1930]: Removed session 19. May 13 23:44:44.520888 systemd[1]: Started sshd@19-172.31.17.246:22-139.178.89.65:40966.service - OpenSSH per-connection server daemon (139.178.89.65:40966). May 13 23:44:44.716436 sshd[5594]: Accepted publickey for core from 139.178.89.65 port 40966 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:44.718952 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:44.727634 systemd-logind[1930]: New session 20 of user core. May 13 23:44:44.737519 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 23:44:44.983414 sshd[5596]: Connection closed by 139.178.89.65 port 40966 May 13 23:44:44.984729 sshd-session[5594]: pam_unix(sshd:session): session closed for user core May 13 23:44:44.992186 systemd[1]: sshd@19-172.31.17.246:22-139.178.89.65:40966.service: Deactivated successfully. May 13 23:44:44.996350 systemd[1]: session-20.scope: Deactivated successfully. 
May 13 23:44:44.998868 systemd-logind[1930]: Session 20 logged out. Waiting for processes to exit. May 13 23:44:45.001053 systemd-logind[1930]: Removed session 20. May 13 23:44:50.023059 systemd[1]: Started sshd@20-172.31.17.246:22-139.178.89.65:58442.service - OpenSSH per-connection server daemon (139.178.89.65:58442). May 13 23:44:50.224905 sshd[5610]: Accepted publickey for core from 139.178.89.65 port 58442 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:50.227478 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:50.237396 systemd-logind[1930]: New session 21 of user core. May 13 23:44:50.241513 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 23:44:50.517550 sshd[5612]: Connection closed by 139.178.89.65 port 58442 May 13 23:44:50.519602 sshd-session[5610]: pam_unix(sshd:session): session closed for user core May 13 23:44:50.526601 systemd[1]: sshd@20-172.31.17.246:22-139.178.89.65:58442.service: Deactivated successfully. May 13 23:44:50.531363 systemd[1]: session-21.scope: Deactivated successfully. May 13 23:44:50.533122 systemd-logind[1930]: Session 21 logged out. Waiting for processes to exit. May 13 23:44:50.535184 systemd-logind[1930]: Removed session 21. May 13 23:44:52.317606 containerd[1940]: time="2025-05-13T23:44:52.317543817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4\" id:\"5c0bed609988e28a6a5bb3a469c1889d32a9ff9a674d59dd403489c67ba19b43\" pid:5639 exited_at:{seconds:1747179892 nanos:317158497}" May 13 23:44:55.557683 systemd[1]: Started sshd@21-172.31.17.246:22-139.178.89.65:58454.service - OpenSSH per-connection server daemon (139.178.89.65:58454). 
May 13 23:44:55.765073 sshd[5649]: Accepted publickey for core from 139.178.89.65 port 58454 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:44:55.767668 sshd-session[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:44:55.775654 systemd-logind[1930]: New session 22 of user core. May 13 23:44:55.780509 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 23:44:56.027170 sshd[5651]: Connection closed by 139.178.89.65 port 58454 May 13 23:44:56.028139 sshd-session[5649]: pam_unix(sshd:session): session closed for user core May 13 23:44:56.039018 systemd[1]: sshd@21-172.31.17.246:22-139.178.89.65:58454.service: Deactivated successfully. May 13 23:44:56.047478 systemd[1]: session-22.scope: Deactivated successfully. May 13 23:44:56.053808 systemd-logind[1930]: Session 22 logged out. Waiting for processes to exit. May 13 23:44:56.058396 systemd-logind[1930]: Removed session 22. May 13 23:44:57.569955 containerd[1940]: time="2025-05-13T23:44:57.569883843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae\" id:\"fb660cb37a32653401a7e630218a1146b68d02e24a8f56e3f825d9029224c675\" pid:5675 exited_at:{seconds:1747179897 nanos:569047071}" May 13 23:45:01.070318 systemd[1]: Started sshd@22-172.31.17.246:22-139.178.89.65:55246.service - OpenSSH per-connection server daemon (139.178.89.65:55246). May 13 23:45:01.274726 sshd[5688]: Accepted publickey for core from 139.178.89.65 port 55246 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:45:01.278060 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:01.286458 systemd-logind[1930]: New session 23 of user core. May 13 23:45:01.294518 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 13 23:45:01.545421 sshd[5690]: Connection closed by 139.178.89.65 port 55246 May 13 23:45:01.545182 sshd-session[5688]: pam_unix(sshd:session): session closed for user core May 13 23:45:01.551897 systemd[1]: sshd@22-172.31.17.246:22-139.178.89.65:55246.service: Deactivated successfully. May 13 23:45:01.555321 systemd[1]: session-23.scope: Deactivated successfully. May 13 23:45:01.557179 systemd-logind[1930]: Session 23 logged out. Waiting for processes to exit. May 13 23:45:01.559724 systemd-logind[1930]: Removed session 23. May 13 23:45:06.586312 systemd[1]: Started sshd@23-172.31.17.246:22-139.178.89.65:59464.service - OpenSSH per-connection server daemon (139.178.89.65:59464). May 13 23:45:06.784194 sshd[5702]: Accepted publickey for core from 139.178.89.65 port 59464 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:45:06.786649 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:06.795773 systemd-logind[1930]: New session 24 of user core. May 13 23:45:06.802505 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 23:45:07.043006 sshd[5704]: Connection closed by 139.178.89.65 port 59464 May 13 23:45:07.043916 sshd-session[5702]: pam_unix(sshd:session): session closed for user core May 13 23:45:07.050869 systemd[1]: sshd@23-172.31.17.246:22-139.178.89.65:59464.service: Deactivated successfully. May 13 23:45:07.056178 systemd[1]: session-24.scope: Deactivated successfully. May 13 23:45:07.057890 systemd-logind[1930]: Session 24 logged out. Waiting for processes to exit. May 13 23:45:07.060335 systemd-logind[1930]: Removed session 24. May 13 23:45:12.079438 systemd[1]: Started sshd@24-172.31.17.246:22-139.178.89.65:59476.service - OpenSSH per-connection server daemon (139.178.89.65:59476). 
May 13 23:45:12.272708 sshd[5723]: Accepted publickey for core from 139.178.89.65 port 59476 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:45:12.275317 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:12.284373 systemd-logind[1930]: New session 25 of user core. May 13 23:45:12.291505 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 23:45:12.528269 sshd[5725]: Connection closed by 139.178.89.65 port 59476 May 13 23:45:12.529071 sshd-session[5723]: pam_unix(sshd:session): session closed for user core May 13 23:45:12.537194 systemd[1]: sshd@24-172.31.17.246:22-139.178.89.65:59476.service: Deactivated successfully. May 13 23:45:12.541132 systemd[1]: session-25.scope: Deactivated successfully. May 13 23:45:12.542828 systemd-logind[1930]: Session 25 logged out. Waiting for processes to exit. May 13 23:45:12.545977 systemd-logind[1930]: Removed session 25. May 13 23:45:17.568336 systemd[1]: Started sshd@25-172.31.17.246:22-139.178.89.65:45540.service - OpenSSH per-connection server daemon (139.178.89.65:45540). May 13 23:45:17.776964 sshd[5739]: Accepted publickey for core from 139.178.89.65 port 45540 ssh2: RSA SHA256:xZnWmgdZViEQ6G9tYEzm7AmOSvmGFHk0KXIslK9RHvo May 13 23:45:17.779482 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:45:17.788611 systemd-logind[1930]: New session 26 of user core. May 13 23:45:17.794523 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 23:45:18.060931 sshd[5741]: Connection closed by 139.178.89.65 port 45540 May 13 23:45:18.061805 sshd-session[5739]: pam_unix(sshd:session): session closed for user core May 13 23:45:18.067069 systemd-logind[1930]: Session 26 logged out. Waiting for processes to exit. May 13 23:45:18.068049 systemd[1]: sshd@25-172.31.17.246:22-139.178.89.65:45540.service: Deactivated successfully. 
May 13 23:45:18.072872 systemd[1]: session-26.scope: Deactivated successfully. May 13 23:45:18.078103 systemd-logind[1930]: Removed session 26. May 13 23:45:22.325808 containerd[1940]: time="2025-05-13T23:45:22.325722314Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4\" id:\"3356f73eacde62531adbf89831cb90cec0e8afb7791d38222e8bc451d9c5b0bc\" pid:5779 exited_at:{seconds:1747179922 nanos:325053146}" May 13 23:45:25.334517 containerd[1940]: time="2025-05-13T23:45:25.334432085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2210fa3833d972e2f7ecb0f731cc648ed2d6e37314edd4747aad32b0547aa4\" id:\"d619755918b686b222bcc6208ef1b2f20407c53e48ae9b7e7e6a5927b4e27c40\" pid:5807 exited_at:{seconds:1747179925 nanos:333790265}" May 13 23:45:27.554411 containerd[1940]: time="2025-05-13T23:45:27.554123492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6547c135dd792925454b6ced1e389cf6f181c968ab75c0287041afc507b93bae\" id:\"6dc0fc1d96f2b07bb9bf6c4489f3bd3320dc373941102f7891b9241943e6c081\" pid:5828 exited_at:{seconds:1747179927 nanos:553710500}" May 13 23:45:31.416178 systemd[1]: cri-containerd-5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89.scope: Deactivated successfully. May 13 23:45:31.418050 systemd[1]: cri-containerd-5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89.scope: Consumed 4.934s CPU time, 55.6M memory peak, 128K read from disk. 
May 13 23:45:31.424928 containerd[1940]: time="2025-05-13T23:45:31.424824839Z" level=info msg="received exit event container_id:\"5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89\" id:\"5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89\" pid:3067 exit_status:1 exited_at:{seconds:1747179931 nanos:424068743}" May 13 23:45:31.427727 containerd[1940]: time="2025-05-13T23:45:31.426268763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89\" id:\"5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89\" pid:3067 exit_status:1 exited_at:{seconds:1747179931 nanos:424068743}" May 13 23:45:31.469357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89-rootfs.mount: Deactivated successfully. May 13 23:45:31.575181 systemd[1]: cri-containerd-f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d.scope: Deactivated successfully. May 13 23:45:31.578362 systemd[1]: cri-containerd-f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d.scope: Consumed 9.045s CPU time, 44.3M memory peak, 168K read from disk. 
May 13 23:45:31.581317 containerd[1940]: time="2025-05-13T23:45:31.581015976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d\" id:\"f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d\" pid:3579 exit_status:1 exited_at:{seconds:1747179931 nanos:580392996}" May 13 23:45:31.581317 containerd[1940]: time="2025-05-13T23:45:31.581178480Z" level=info msg="received exit event container_id:\"f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d\" id:\"f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d\" pid:3579 exit_status:1 exited_at:{seconds:1747179931 nanos:580392996}" May 13 23:45:31.622997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d-rootfs.mount: Deactivated successfully. May 13 23:45:31.784052 kubelet[3229]: I0513 23:45:31.782766 3229 scope.go:117] "RemoveContainer" containerID="f07bd3439a7066bf8f97c2c00d0c71fcadff5477e54aaec5170a26e327252c0d" May 13 23:45:31.788672 containerd[1940]: time="2025-05-13T23:45:31.788575717Z" level=info msg="CreateContainer within sandbox \"f26e0a8667c265e63d6e1af373330cf16b0e164875945d958127af3fe4e5a45d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 13 23:45:31.790164 kubelet[3229]: I0513 23:45:31.789357 3229 scope.go:117] "RemoveContainer" containerID="5b76665be31bf27d9d80e3022ba9754f5fdff3481422e8aa5ed676e619864b89" May 13 23:45:31.794296 containerd[1940]: time="2025-05-13T23:45:31.794016277Z" level=info msg="CreateContainer within sandbox \"708a7d7070379cd0eab1326421406f5dd6819c383d24d46df1f5664bbd503fc2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 13 23:45:31.810581 containerd[1940]: time="2025-05-13T23:45:31.810375517Z" level=info msg="Container a3e755074c56ab5c3135c3d4f9519e4c6b421c334ea7fcc78f98e11843f4a3cf: CDI devices from CRI Config.CDIDevices: []"
May 13 23:45:31.823635 containerd[1940]: time="2025-05-13T23:45:31.823450237Z" level=info msg="Container 7c247489849b972d07f3c24dfca917b89a149b7fdca1ea0b653b1608f1b09883: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:31.834302 containerd[1940]: time="2025-05-13T23:45:31.834024169Z" level=info msg="CreateContainer within sandbox \"f26e0a8667c265e63d6e1af373330cf16b0e164875945d958127af3fe4e5a45d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a3e755074c56ab5c3135c3d4f9519e4c6b421c334ea7fcc78f98e11843f4a3cf\"" May 13 23:45:31.835058 containerd[1940]: time="2025-05-13T23:45:31.834866857Z" level=info msg="StartContainer for \"a3e755074c56ab5c3135c3d4f9519e4c6b421c334ea7fcc78f98e11843f4a3cf\"" May 13 23:45:31.836832 containerd[1940]: time="2025-05-13T23:45:31.836774701Z" level=info msg="connecting to shim a3e755074c56ab5c3135c3d4f9519e4c6b421c334ea7fcc78f98e11843f4a3cf" address="unix:///run/containerd/s/11e8d71fcfcf06fb4a1aaa1804dd6c782cac4391e4d1eb90ed204b1b58c6002b" protocol=ttrpc version=3 May 13 23:45:31.846772 containerd[1940]: time="2025-05-13T23:45:31.846702253Z" level=info msg="CreateContainer within sandbox \"708a7d7070379cd0eab1326421406f5dd6819c383d24d46df1f5664bbd503fc2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7c247489849b972d07f3c24dfca917b89a149b7fdca1ea0b653b1608f1b09883\"" May 13 23:45:31.848156 containerd[1940]: time="2025-05-13T23:45:31.847984981Z" level=info msg="StartContainer for \"7c247489849b972d07f3c24dfca917b89a149b7fdca1ea0b653b1608f1b09883\"" May 13 23:45:31.851912 containerd[1940]: time="2025-05-13T23:45:31.851771053Z" level=info msg="connecting to shim 7c247489849b972d07f3c24dfca917b89a149b7fdca1ea0b653b1608f1b09883" address="unix:///run/containerd/s/af3a7e94d2b6137f615dd0b935c850450efd2e23126250e27e14f5416657f05b" protocol=ttrpc version=3
May 13 23:45:31.876571 systemd[1]: Started cri-containerd-a3e755074c56ab5c3135c3d4f9519e4c6b421c334ea7fcc78f98e11843f4a3cf.scope - libcontainer container a3e755074c56ab5c3135c3d4f9519e4c6b421c334ea7fcc78f98e11843f4a3cf. May 13 23:45:31.898508 systemd[1]: Started cri-containerd-7c247489849b972d07f3c24dfca917b89a149b7fdca1ea0b653b1608f1b09883.scope - libcontainer container 7c247489849b972d07f3c24dfca917b89a149b7fdca1ea0b653b1608f1b09883. May 13 23:45:31.986426 containerd[1940]: time="2025-05-13T23:45:31.985309502Z" level=info msg="StartContainer for \"a3e755074c56ab5c3135c3d4f9519e4c6b421c334ea7fcc78f98e11843f4a3cf\" returns successfully" May 13 23:45:32.057084 containerd[1940]: time="2025-05-13T23:45:32.056938666Z" level=info msg="StartContainer for \"7c247489849b972d07f3c24dfca917b89a149b7fdca1ea0b653b1608f1b09883\" returns successfully" May 13 23:45:32.479800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196060693.mount: Deactivated successfully. May 13 23:45:34.658758 kubelet[3229]: E0513 23:45:34.658584 3229 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-17-246)" May 13 23:45:37.286510 systemd[1]: cri-containerd-5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1.scope: Deactivated successfully. May 13 23:45:37.287056 systemd[1]: cri-containerd-5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1.scope: Consumed 3.037s CPU time, 18.9M memory peak, 128K read from disk.
May 13 23:45:37.292589 containerd[1940]: time="2025-05-13T23:45:37.292538440Z" level=info msg="received exit event container_id:\"5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1\" id:\"5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1\" pid:3075 exit_status:1 exited_at:{seconds:1747179937 nanos:291909292}" May 13 23:45:37.293604 containerd[1940]: time="2025-05-13T23:45:37.293104901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1\" id:\"5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1\" pid:3075 exit_status:1 exited_at:{seconds:1747179937 nanos:291909292}" May 13 23:45:37.336997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1-rootfs.mount: Deactivated successfully. May 13 23:45:37.821752 kubelet[3229]: I0513 23:45:37.821715 3229 scope.go:117] "RemoveContainer" containerID="5bd0ac087df80f1ee086d3cf074285562567346d218549957b28910764662ce1" May 13 23:45:37.825691 containerd[1940]: time="2025-05-13T23:45:37.825638431Z" level=info msg="CreateContainer within sandbox \"c995bbec20f3b7f3aa171704449013b05541d914cdfbe638d4499ac17580e9ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 13 23:45:37.843529 containerd[1940]: time="2025-05-13T23:45:37.843460891Z" level=info msg="Container 4ff16b28827713ee1c6691f1f5c4e6e02dac6299060a6ba8a2f5ea09acb09ec2: CDI devices from CRI Config.CDIDevices: []" May 13 23:45:37.860980 containerd[1940]: time="2025-05-13T23:45:37.860907679Z" level=info msg="CreateContainer within sandbox \"c995bbec20f3b7f3aa171704449013b05541d914cdfbe638d4499ac17580e9ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4ff16b28827713ee1c6691f1f5c4e6e02dac6299060a6ba8a2f5ea09acb09ec2\""
May 13 23:45:37.862065 containerd[1940]: time="2025-05-13T23:45:37.861990991Z" level=info msg="StartContainer for \"4ff16b28827713ee1c6691f1f5c4e6e02dac6299060a6ba8a2f5ea09acb09ec2\"" May 13 23:45:37.864189 containerd[1940]: time="2025-05-13T23:45:37.864122815Z" level=info msg="connecting to shim 4ff16b28827713ee1c6691f1f5c4e6e02dac6299060a6ba8a2f5ea09acb09ec2" address="unix:///run/containerd/s/24a408e62561b42b1a26a8e1df5323f8c673fe1e31eb193f55600c74852cb531" protocol=ttrpc version=3 May 13 23:45:37.907571 systemd[1]: Started cri-containerd-4ff16b28827713ee1c6691f1f5c4e6e02dac6299060a6ba8a2f5ea09acb09ec2.scope - libcontainer container 4ff16b28827713ee1c6691f1f5c4e6e02dac6299060a6ba8a2f5ea09acb09ec2. May 13 23:45:37.990000 containerd[1940]: time="2025-05-13T23:45:37.989901512Z" level=info msg="StartContainer for \"4ff16b28827713ee1c6691f1f5c4e6e02dac6299060a6ba8a2f5ea09acb09ec2\" returns successfully"