May 9 23:59:29.194563 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 9 23:59:29.194610 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025
May 9 23:59:29.194635 kernel: KASLR disabled due to lack of seed
May 9 23:59:29.194668 kernel: efi: EFI v2.7 by EDK II
May 9 23:59:29.197028 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b000a98 MEMRESERVE=0x7852ee18
May 9 23:59:29.197047 kernel: ACPI: Early table checksum verification disabled
May 9 23:59:29.197065 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 9 23:59:29.197081 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 9 23:59:29.197097 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 9 23:59:29.197115 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 9 23:59:29.197142 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 9 23:59:29.197158 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 9 23:59:29.197174 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 9 23:59:29.197190 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 9 23:59:29.197209 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 9 23:59:29.197230 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 9 23:59:29.197247 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 9 23:59:29.197264 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 9 23:59:29.197281 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 9 23:59:29.197297 kernel: printk: bootconsole [uart0] enabled
May 9 23:59:29.197314 kernel: NUMA: Failed to initialise from firmware
May 9 23:59:29.197331 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 9 23:59:29.197348 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 9 23:59:29.197365 kernel: Zone ranges:
May 9 23:59:29.197381 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 9 23:59:29.197398 kernel: DMA32 empty
May 9 23:59:29.197419 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 9 23:59:29.197436 kernel: Movable zone start for each node
May 9 23:59:29.197453 kernel: Early memory node ranges
May 9 23:59:29.197469 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 9 23:59:29.197486 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 9 23:59:29.197502 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
May 9 23:59:29.197518 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 9 23:59:29.197535 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 9 23:59:29.197551 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 9 23:59:29.197568 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 9 23:59:29.197585 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 9 23:59:29.197601 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 9 23:59:29.197622 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 9 23:59:29.197640 kernel: psci: probing for conduit method from ACPI.
May 9 23:59:29.197711 kernel: psci: PSCIv1.0 detected in firmware.
May 9 23:59:29.197732 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 23:59:29.197750 kernel: psci: Trusted OS migration not required
May 9 23:59:29.197772 kernel: psci: SMC Calling Convention v1.1
May 9 23:59:29.197791 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 23:59:29.197808 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 23:59:29.197826 kernel: pcpu-alloc: [0] 0 [0] 1
May 9 23:59:29.197843 kernel: Detected PIPT I-cache on CPU0
May 9 23:59:29.197861 kernel: CPU features: detected: GIC system register CPU interface
May 9 23:59:29.197879 kernel: CPU features: detected: Spectre-v2
May 9 23:59:29.197897 kernel: CPU features: detected: Spectre-v3a
May 9 23:59:29.197915 kernel: CPU features: detected: Spectre-BHB
May 9 23:59:29.197933 kernel: CPU features: detected: ARM erratum 1742098
May 9 23:59:29.197952 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 9 23:59:29.197976 kernel: alternatives: applying boot alternatives
May 9 23:59:29.197998 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:59:29.198018 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 23:59:29.198037 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 23:59:29.198055 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 23:59:29.198072 kernel: Fallback order for Node 0: 0
May 9 23:59:29.198090 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
May 9 23:59:29.198108 kernel: Policy zone: Normal
May 9 23:59:29.198125 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 23:59:29.198142 kernel: software IO TLB: area num 2.
May 9 23:59:29.198160 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 9 23:59:29.198187 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
May 9 23:59:29.198205 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 23:59:29.198223 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 23:59:29.198241 kernel: rcu: RCU event tracing is enabled.
May 9 23:59:29.198274 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 23:59:29.198298 kernel: Trampoline variant of Tasks RCU enabled.
May 9 23:59:29.198316 kernel: Tracing variant of Tasks RCU enabled.
May 9 23:59:29.198334 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 23:59:29.198351 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 23:59:29.198368 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 23:59:29.198386 kernel: GICv3: 96 SPIs implemented
May 9 23:59:29.198409 kernel: GICv3: 0 Extended SPIs implemented
May 9 23:59:29.198426 kernel: Root IRQ handler: gic_handle_irq
May 9 23:59:29.198444 kernel: GICv3: GICv3 features: 16 PPIs
May 9 23:59:29.198461 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 9 23:59:29.198478 kernel: ITS [mem 0x10080000-0x1009ffff]
May 9 23:59:29.198496 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 9 23:59:29.198513 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 9 23:59:29.198531 kernel: GICv3: using LPI property table @0x00000004000d0000
May 9 23:59:29.198548 kernel: ITS: Using hypervisor restricted LPI range [128]
May 9 23:59:29.198566 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 9 23:59:29.198583 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 23:59:29.198601 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 9 23:59:29.198623 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 9 23:59:29.198641 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 9 23:59:29.198690 kernel: Console: colour dummy device 80x25
May 9 23:59:29.198711 kernel: printk: console [tty1] enabled
May 9 23:59:29.198729 kernel: ACPI: Core revision 20230628
May 9 23:59:29.198748 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 9 23:59:29.198766 kernel: pid_max: default: 32768 minimum: 301
May 9 23:59:29.198784 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 23:59:29.198802 kernel: landlock: Up and running.
May 9 23:59:29.198826 kernel: SELinux: Initializing.
May 9 23:59:29.198844 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:59:29.198862 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:59:29.198880 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:59:29.198898 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:59:29.198916 kernel: rcu: Hierarchical SRCU implementation.
May 9 23:59:29.198934 kernel: rcu: Max phase no-delay instances is 400.
May 9 23:59:29.198952 kernel: Platform MSI: ITS@0x10080000 domain created
May 9 23:59:29.198970 kernel: PCI/MSI: ITS@0x10080000 domain created
May 9 23:59:29.198992 kernel: Remapping and enabling EFI services.
May 9 23:59:29.199010 kernel: smp: Bringing up secondary CPUs ...
May 9 23:59:29.199028 kernel: Detected PIPT I-cache on CPU1
May 9 23:59:29.199046 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 9 23:59:29.199064 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 9 23:59:29.199082 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 9 23:59:29.199100 kernel: smp: Brought up 1 node, 2 CPUs
May 9 23:59:29.199131 kernel: SMP: Total of 2 processors activated.
May 9 23:59:29.199150 kernel: CPU features: detected: 32-bit EL0 Support
May 9 23:59:29.199174 kernel: CPU features: detected: 32-bit EL1 Support
May 9 23:59:29.199194 kernel: CPU features: detected: CRC32 instructions
May 9 23:59:29.199212 kernel: CPU: All CPU(s) started at EL1
May 9 23:59:29.199248 kernel: alternatives: applying system-wide alternatives
May 9 23:59:29.199271 kernel: devtmpfs: initialized
May 9 23:59:29.199291 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 23:59:29.199312 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 23:59:29.199331 kernel: pinctrl core: initialized pinctrl subsystem
May 9 23:59:29.199350 kernel: SMBIOS 3.0.0 present.
May 9 23:59:29.199370 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 9 23:59:29.199394 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:59:29.199413 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:59:29.199432 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:59:29.199452 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:59:29.199471 kernel: audit: initializing netlink subsys (disabled)
May 9 23:59:29.199490 kernel: audit: type=2000 audit(0.294:1): state=initialized audit_enabled=0 res=1
May 9 23:59:29.199509 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:59:29.199534 kernel: cpuidle: using governor menu
May 9 23:59:29.199553 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:59:29.199572 kernel: ASID allocator initialised with 65536 entries
May 9 23:59:29.199591 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:59:29.199610 kernel: Serial: AMBA PL011 UART driver
May 9 23:59:29.199628 kernel: Modules: 17488 pages in range for non-PLT usage
May 9 23:59:29.199647 kernel: Modules: 509008 pages in range for PLT usage
May 9 23:59:29.201753 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:59:29.201774 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:59:29.201804 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:59:29.201823 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:59:29.201842 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:59:29.201861 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:59:29.201879 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:59:29.201898 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:59:29.201916 kernel: ACPI: Added _OSI(Module Device)
May 9 23:59:29.201935 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:59:29.201953 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:59:29.201986 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:59:29.202013 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:59:29.202032 kernel: ACPI: Interpreter enabled
May 9 23:59:29.202050 kernel: ACPI: Using GIC for interrupt routing
May 9 23:59:29.202069 kernel: ACPI: MCFG table detected, 1 entries
May 9 23:59:29.202087 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 9 23:59:29.202428 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 23:59:29.202644 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 23:59:29.202880 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 23:59:29.203087 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 9 23:59:29.203361 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 9 23:59:29.203395 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 9 23:59:29.203416 kernel: acpiphp: Slot [1] registered
May 9 23:59:29.203435 kernel: acpiphp: Slot [2] registered
May 9 23:59:29.203454 kernel: acpiphp: Slot [3] registered
May 9 23:59:29.203472 kernel: acpiphp: Slot [4] registered
May 9 23:59:29.203499 kernel: acpiphp: Slot [5] registered
May 9 23:59:29.203518 kernel: acpiphp: Slot [6] registered
May 9 23:59:29.203537 kernel: acpiphp: Slot [7] registered
May 9 23:59:29.203555 kernel: acpiphp: Slot [8] registered
May 9 23:59:29.203573 kernel: acpiphp: Slot [9] registered
May 9 23:59:29.203592 kernel: acpiphp: Slot [10] registered
May 9 23:59:29.203610 kernel: acpiphp: Slot [11] registered
May 9 23:59:29.203628 kernel: acpiphp: Slot [12] registered
May 9 23:59:29.203647 kernel: acpiphp: Slot [13] registered
May 9 23:59:29.206768 kernel: acpiphp: Slot [14] registered
May 9 23:59:29.206789 kernel: acpiphp: Slot [15] registered
May 9 23:59:29.206808 kernel: acpiphp: Slot [16] registered
May 9 23:59:29.206827 kernel: acpiphp: Slot [17] registered
May 9 23:59:29.206845 kernel: acpiphp: Slot [18] registered
May 9 23:59:29.206864 kernel: acpiphp: Slot [19] registered
May 9 23:59:29.206882 kernel: acpiphp: Slot [20] registered
May 9 23:59:29.206901 kernel: acpiphp: Slot [21] registered
May 9 23:59:29.206919 kernel: acpiphp: Slot [22] registered
May 9 23:59:29.206938 kernel: acpiphp: Slot [23] registered
May 9 23:59:29.206961 kernel: acpiphp: Slot [24] registered
May 9 23:59:29.206979 kernel: acpiphp: Slot [25] registered
May 9 23:59:29.206998 kernel: acpiphp: Slot [26] registered
May 9 23:59:29.207016 kernel: acpiphp: Slot [27] registered
May 9 23:59:29.207034 kernel: acpiphp: Slot [28] registered
May 9 23:59:29.207053 kernel: acpiphp: Slot [29] registered
May 9 23:59:29.207071 kernel: acpiphp: Slot [30] registered
May 9 23:59:29.207089 kernel: acpiphp: Slot [31] registered
May 9 23:59:29.207107 kernel: PCI host bridge to bus 0000:00
May 9 23:59:29.207361 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 9 23:59:29.207545 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 23:59:29.208814 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 9 23:59:29.209018 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 9 23:59:29.209253 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 9 23:59:29.209554 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 9 23:59:29.210543 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 9 23:59:29.210846 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 9 23:59:29.211054 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 9 23:59:29.211302 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 9 23:59:29.213003 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 9 23:59:29.213233 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 9 23:59:29.213443 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 9 23:59:29.213708 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 9 23:59:29.213920 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 9 23:59:29.214123 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 9 23:59:29.214349 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 9 23:59:29.214560 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 9 23:59:29.214874 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 9 23:59:29.215098 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 9 23:59:29.215297 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 9 23:59:29.215504 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 23:59:29.215718 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 9 23:59:29.215753 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 23:59:29.215773 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 23:59:29.215792 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 23:59:29.215811 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 23:59:29.215830 kernel: iommu: Default domain type: Translated
May 9 23:59:29.215855 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:59:29.215874 kernel: efivars: Registered efivars operations
May 9 23:59:29.215893 kernel: vgaarb: loaded
May 9 23:59:29.215911 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:59:29.215930 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:59:29.215948 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:59:29.215967 kernel: pnp: PnP ACPI init
May 9 23:59:29.216186 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 9 23:59:29.216213 kernel: pnp: PnP ACPI: found 1 devices
May 9 23:59:29.216238 kernel: NET: Registered PF_INET protocol family
May 9 23:59:29.216258 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:59:29.216277 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:59:29.216296 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:59:29.216315 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:59:29.216334 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:59:29.216353 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:59:29.216372 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:59:29.216391 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:59:29.216414 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:59:29.216432 kernel: PCI: CLS 0 bytes, default 64
May 9 23:59:29.216451 kernel: kvm [1]: HYP mode not available
May 9 23:59:29.216484 kernel: Initialise system trusted keyrings
May 9 23:59:29.216504 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:59:29.216523 kernel: Key type asymmetric registered
May 9 23:59:29.216541 kernel: Asymmetric key parser 'x509' registered
May 9 23:59:29.216560 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:59:29.216578 kernel: io scheduler mq-deadline registered
May 9 23:59:29.216603 kernel: io scheduler kyber registered
May 9 23:59:29.216621 kernel: io scheduler bfq registered
May 9 23:59:29.216869 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 9 23:59:29.216897 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 23:59:29.216917 kernel: ACPI: button: Power Button [PWRB]
May 9 23:59:29.216936 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 9 23:59:29.216954 kernel: ACPI: button: Sleep Button [SLPB]
May 9 23:59:29.216972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:59:29.216998 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 9 23:59:29.217203 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 9 23:59:29.217228 kernel: printk: console [ttyS0] disabled
May 9 23:59:29.217247 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 9 23:59:29.217266 kernel: printk: console [ttyS0] enabled
May 9 23:59:29.217285 kernel: printk: bootconsole [uart0] disabled
May 9 23:59:29.217303 kernel: thunder_xcv, ver 1.0
May 9 23:59:29.217321 kernel: thunder_bgx, ver 1.0
May 9 23:59:29.217340 kernel: nicpf, ver 1.0
May 9 23:59:29.217363 kernel: nicvf, ver 1.0
May 9 23:59:29.217589 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:59:29.220959 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:59:28 UTC (1746835168)
May 9 23:59:29.221007 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:59:29.221028 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 9 23:59:29.221047 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:59:29.221066 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:59:29.221085 kernel: NET: Registered PF_INET6 protocol family
May 9 23:59:29.221114 kernel: Segment Routing with IPv6
May 9 23:59:29.221132 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:59:29.221151 kernel: NET: Registered PF_PACKET protocol family
May 9 23:59:29.221169 kernel: Key type dns_resolver registered
May 9 23:59:29.221188 kernel: registered taskstats version 1
May 9 23:59:29.221206 kernel: Loading compiled-in X.509 certificates
May 9 23:59:29.221225 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff'
May 9 23:59:29.221243 kernel: Key type .fscrypt registered
May 9 23:59:29.221261 kernel: Key type fscrypt-provisioning registered
May 9 23:59:29.221284 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:59:29.221303 kernel: ima: Allocated hash algorithm: sha1
May 9 23:59:29.221321 kernel: ima: No architecture policies found
May 9 23:59:29.221339 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:59:29.221358 kernel: clk: Disabling unused clocks
May 9 23:59:29.221376 kernel: Freeing unused kernel memory: 39424K
May 9 23:59:29.221395 kernel: Run /init as init process
May 9 23:59:29.221413 kernel: with arguments:
May 9 23:59:29.221431 kernel: /init
May 9 23:59:29.221450 kernel: with environment:
May 9 23:59:29.221473 kernel: HOME=/
May 9 23:59:29.221491 kernel: TERM=linux
May 9 23:59:29.221509 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:59:29.221532 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:59:29.221555 systemd[1]: Detected virtualization amazon.
May 9 23:59:29.221576 systemd[1]: Detected architecture arm64.
May 9 23:59:29.221596 systemd[1]: Running in initrd.
May 9 23:59:29.221620 systemd[1]: No hostname configured, using default hostname.
May 9 23:59:29.221640 systemd[1]: Hostname set to .
May 9 23:59:29.221698 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:59:29.221721 systemd[1]: Queued start job for default target initrd.target.
May 9 23:59:29.221742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:59:29.221763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:59:29.221784 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 23:59:29.221805 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:59:29.221832 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 23:59:29.221853 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 23:59:29.221876 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 23:59:29.221897 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 23:59:29.221917 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:59:29.221938 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:59:29.221958 systemd[1]: Reached target paths.target - Path Units.
May 9 23:59:29.221983 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:59:29.222003 systemd[1]: Reached target swap.target - Swaps.
May 9 23:59:29.222023 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:59:29.222043 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:59:29.222064 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:59:29.222084 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:59:29.222104 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 23:59:29.222125 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:59:29.222149 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:59:29.222170 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:59:29.222190 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:59:29.222210 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 23:59:29.222231 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:59:29.222251 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 23:59:29.222290 systemd[1]: Starting systemd-fsck-usr.service...
May 9 23:59:29.222312 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:59:29.222333 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:59:29.222359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:29.222379 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 23:59:29.222400 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:59:29.222420 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:59:29.222478 systemd-journald[251]: Collecting audit messages is disabled.
May 9 23:59:29.222527 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:59:29.222548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:29.222569 systemd-journald[251]: Journal started
May 9 23:59:29.222610 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2eb71b5a58c466ee086101750b3e08) is 8.0M, max 75.3M, 67.3M free.
May 9 23:59:29.207849 systemd-modules-load[252]: Inserted module 'overlay'
May 9 23:59:29.232774 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:59:29.241940 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:59:29.242026 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:29.250761 systemd-modules-load[252]: Inserted module 'br_netfilter'
May 9 23:59:29.252524 kernel: Bridge firewalling registered
May 9 23:59:29.253456 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:59:29.259608 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:59:29.262609 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:59:29.275902 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:59:29.283953 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:59:29.311399 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:59:29.330010 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:29.342935 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:59:29.347416 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:59:29.355723 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:59:29.371950 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:59:29.394532 dracut-cmdline[283]: dracut-dracut-053
May 9 23:59:29.400297 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:59:29.463353 systemd-resolved[288]: Positive Trust Anchors:
May 9 23:59:29.463387 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:59:29.463450 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:59:29.533684 kernel: SCSI subsystem initialized
May 9 23:59:29.538682 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:59:29.551698 kernel: iscsi: registered transport (tcp)
May 9 23:59:29.574146 kernel: iscsi: registered transport (qla4xxx)
May 9 23:59:29.574221 kernel: QLogic iSCSI HBA Driver
May 9 23:59:29.681695 kernel: random: crng init done
May 9 23:59:29.681908 systemd-resolved[288]: Defaulting to hostname 'linux'.
May 9 23:59:29.685412 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:59:29.689514 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:59:29.709613 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:59:29.718966 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:59:29.757719 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:59:29.757796 kernel: device-mapper: uevent: version 1.0.3
May 9 23:59:29.759712 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 23:59:29.824714 kernel: raid6: neonx8 gen() 6725 MB/s
May 9 23:59:29.841690 kernel: raid6: neonx4 gen() 6553 MB/s
May 9 23:59:29.858687 kernel: raid6: neonx2 gen() 5460 MB/s
May 9 23:59:29.875684 kernel: raid6: neonx1 gen() 3962 MB/s
May 9 23:59:29.892684 kernel: raid6: int64x8 gen() 3814 MB/s
May 9 23:59:29.909685 kernel: raid6: int64x4 gen() 3723 MB/s
May 9 23:59:29.926684 kernel: raid6: int64x2 gen() 3606 MB/s
May 9 23:59:29.944481 kernel: raid6: int64x1 gen() 2752 MB/s
May 9 23:59:29.944516 kernel: raid6: using algorithm neonx8 gen() 6725 MB/s
May 9 23:59:29.962469 kernel: raid6: .... xor() 4801 MB/s, rmw enabled
May 9 23:59:29.962509 kernel: raid6: using neon recovery algorithm
May 9 23:59:29.970954 kernel: xor: measuring software checksum speed
May 9 23:59:29.971015 kernel: 8regs : 10965 MB/sec
May 9 23:59:29.972059 kernel: 32regs : 11949 MB/sec
May 9 23:59:29.973240 kernel: arm64_neon : 9564 MB/sec
May 9 23:59:29.973272 kernel: xor: using function: 32regs (11949 MB/sec)
May 9 23:59:30.057697 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 23:59:30.077224 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:59:30.097898 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:59:30.136766 systemd-udevd[470]: Using default interface naming scheme 'v255'.
May 9 23:59:30.146163 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:59:30.161234 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 23:59:30.199048 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
May 9 23:59:30.256428 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:59:30.263912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:59:30.383958 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:59:30.397007 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 23:59:30.446780 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 23:59:30.451948 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:59:30.457847 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:59:30.470541 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:59:30.485911 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 23:59:30.533333 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:59:30.563702 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 23:59:30.568075 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
May 9 23:59:30.572350 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 9 23:59:30.572714 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 9 23:59:30.583817 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:1f:e7:c8:8d:9f
May 9 23:59:30.593971 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:59:30.603343 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:59:30.605752 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:30.608399 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:30.610537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:59:30.610814 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:30.615894 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:30.636142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:59:30.645894 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 9 23:59:30.645933 kernel: nvme nvme0: pci function 0000:00:04.0
May 9 23:59:30.653728 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 9 23:59:30.663684 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 23:59:30.663760 kernel: GPT:9289727 != 16777215
May 9 23:59:30.663787 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 23:59:30.663812 kernel: GPT:9289727 != 16777215
May 9 23:59:30.663836 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 23:59:30.663860 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:59:30.678071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:59:30.685975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:59:30.735945 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:59:30.780038 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (527)
May 9 23:59:30.806170 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (541)
May 9 23:59:30.871607 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 9 23:59:30.900977 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 9 23:59:30.918384 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 9 23:59:30.934041 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 9 23:59:30.938872 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 9 23:59:30.953903 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 23:59:30.964467 disk-uuid[659]: Primary Header is updated.
May 9 23:59:30.964467 disk-uuid[659]: Secondary Entries is updated.
May 9 23:59:30.964467 disk-uuid[659]: Secondary Header is updated.
May 9 23:59:30.974695 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:59:30.983696 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:59:31.992770 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:59:31.994026 disk-uuid[660]: The operation has completed successfully.
May 9 23:59:32.172255 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 23:59:32.172829 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 23:59:32.221967 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 23:59:32.239749 sh[919]: Success
May 9 23:59:32.258725 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 23:59:32.347262 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 23:59:32.367854 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 23:59:32.374921 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 23:59:32.421248 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354
May 9 23:59:32.421311 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:32.421338 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 23:59:32.424161 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 23:59:32.424194 kernel: BTRFS info (device dm-0): using free space tree
May 9 23:59:32.555690 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 9 23:59:32.578846 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 23:59:32.582058 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 23:59:32.591961 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 23:59:32.607243 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 23:59:32.645095 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:32.645168 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:32.646569 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 23:59:32.653919 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 23:59:32.670642 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 23:59:32.674764 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:32.685356 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 23:59:32.697109 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 23:59:32.775406 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:59:32.788992 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:59:32.845931 systemd-networkd[1112]: lo: Link UP
May 9 23:59:32.845954 systemd-networkd[1112]: lo: Gained carrier
May 9 23:59:32.851024 systemd-networkd[1112]: Enumeration completed
May 9 23:59:32.852536 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:59:32.856923 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:59:32.856941 systemd-networkd[1112]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:59:32.857118 systemd[1]: Reached target network.target - Network.
May 9 23:59:32.868602 systemd-networkd[1112]: eth0: Link UP
May 9 23:59:32.868621 systemd-networkd[1112]: eth0: Gained carrier
May 9 23:59:32.868639 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:59:32.880750 systemd-networkd[1112]: eth0: DHCPv4 address 172.31.31.45/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 9 23:59:33.066814 ignition[1045]: Ignition 2.19.0
May 9 23:59:33.066835 ignition[1045]: Stage: fetch-offline
May 9 23:59:33.067330 ignition[1045]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:33.067353 ignition[1045]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:33.068427 ignition[1045]: Ignition finished successfully
May 9 23:59:33.077936 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:59:33.088955 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 9 23:59:33.118948 ignition[1123]: Ignition 2.19.0
May 9 23:59:33.118973 ignition[1123]: Stage: fetch
May 9 23:59:33.120395 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:33.120421 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:33.120579 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:33.137752 ignition[1123]: PUT result: OK
May 9 23:59:33.140891 ignition[1123]: parsed url from cmdline: ""
May 9 23:59:33.141034 ignition[1123]: no config URL provided
May 9 23:59:33.141055 ignition[1123]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:59:33.141081 ignition[1123]: no config at "/usr/lib/ignition/user.ign"
May 9 23:59:33.141139 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:33.146760 ignition[1123]: PUT result: OK
May 9 23:59:33.149887 ignition[1123]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 9 23:59:33.152514 ignition[1123]: GET result: OK
May 9 23:59:33.153734 ignition[1123]: parsing config with SHA512: 14ec2de04afb481a70dd01291592e10b3ed6e80dae037a804865698d72a3ba65c0184e560566a67020799717197976289b5a8f91bccebe6c2a88768c669f1313
May 9 23:59:33.162381 unknown[1123]: fetched base config from "system"
May 9 23:59:33.162403 unknown[1123]: fetched base config from "system"
May 9 23:59:33.162416 unknown[1123]: fetched user config from "aws"
May 9 23:59:33.165991 ignition[1123]: fetch: fetch complete
May 9 23:59:33.166003 ignition[1123]: fetch: fetch passed
May 9 23:59:33.166093 ignition[1123]: Ignition finished successfully
May 9 23:59:33.174382 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 9 23:59:33.185004 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 23:59:33.219949 ignition[1129]: Ignition 2.19.0
May 9 23:59:33.219978 ignition[1129]: Stage: kargs
May 9 23:59:33.221580 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:33.221606 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:33.222753 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:33.228747 ignition[1129]: PUT result: OK
May 9 23:59:33.237329 ignition[1129]: kargs: kargs passed
May 9 23:59:33.237620 ignition[1129]: Ignition finished successfully
May 9 23:59:33.242807 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 23:59:33.253963 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 23:59:33.288143 ignition[1135]: Ignition 2.19.0
May 9 23:59:33.288178 ignition[1135]: Stage: disks
May 9 23:59:33.289186 ignition[1135]: no configs at "/usr/lib/ignition/base.d"
May 9 23:59:33.289212 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:33.289362 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:33.292948 ignition[1135]: PUT result: OK
May 9 23:59:33.301087 ignition[1135]: disks: disks passed
May 9 23:59:33.301241 ignition[1135]: Ignition finished successfully
May 9 23:59:33.306752 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 23:59:33.311125 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 23:59:33.313436 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:59:33.315477 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:59:33.322871 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:59:33.324796 systemd[1]: Reached target basic.target - Basic System.
May 9 23:59:33.335003 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 23:59:33.397638 systemd-fsck[1143]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 23:59:33.402134 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 23:59:33.414375 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 23:59:33.494859 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none.
May 9 23:59:33.495863 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 23:59:33.497030 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 23:59:33.513845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:59:33.519895 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 23:59:33.523864 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 23:59:33.523961 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 23:59:33.524051 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:59:33.546696 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1162)
May 9 23:59:33.550190 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:33.550262 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:33.550291 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 23:59:33.557096 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 23:59:33.569713 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 23:59:33.567366 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 23:59:33.582153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:59:34.023645 initrd-setup-root[1186]: cut: /sysroot/etc/passwd: No such file or directory
May 9 23:59:34.044486 initrd-setup-root[1193]: cut: /sysroot/etc/group: No such file or directory
May 9 23:59:34.047681 systemd-networkd[1112]: eth0: Gained IPv6LL
May 9 23:59:34.056093 initrd-setup-root[1200]: cut: /sysroot/etc/shadow: No such file or directory
May 9 23:59:34.063725 initrd-setup-root[1207]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 23:59:34.355531 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 23:59:34.364877 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:59:34.372995 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:59:34.393112 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:59:34.395380 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:34.438329 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:59:34.444815 ignition[1275]: INFO : Ignition 2.19.0
May 9 23:59:34.447350 ignition[1275]: INFO : Stage: mount
May 9 23:59:34.447350 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:59:34.447350 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:34.447350 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:34.455162 ignition[1275]: INFO : PUT result: OK
May 9 23:59:34.459719 ignition[1275]: INFO : mount: mount passed
May 9 23:59:34.461575 ignition[1275]: INFO : Ignition finished successfully
May 9 23:59:34.464877 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:59:34.479855 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:59:34.507626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:59:34.528686 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1287)
May 9 23:59:34.533689 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:59:34.533739 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:59:34.533777 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 23:59:34.539688 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 23:59:34.543626 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:59:34.578178 ignition[1303]: INFO : Ignition 2.19.0
May 9 23:59:34.578178 ignition[1303]: INFO : Stage: files
May 9 23:59:34.582467 ignition[1303]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:59:34.582467 ignition[1303]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:59:34.582467 ignition[1303]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:59:34.582467 ignition[1303]: INFO : PUT result: OK
May 9 23:59:34.594677 ignition[1303]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:59:34.597695 ignition[1303]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:59:34.597695 ignition[1303]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:59:34.604422 ignition[1303]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:59:34.607155 ignition[1303]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:59:34.610466 unknown[1303]: wrote ssh authorized keys file for user: core
May 9 23:59:34.613040 ignition[1303]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:59:34.626722 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 23:59:34.626722 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 9 23:59:34.626722 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:59:34.626722 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 9 23:59:34.740060 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 23:59:34.984060 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:59:34.987873 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:59:34.987873 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:59:34.987873 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:59:34.998019 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:59:34.998019 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:59:34.998019 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:59:35.008272 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:59:35.008272 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:59:35.015011 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:59:35.018482 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:59:35.022776 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:35.022776 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:35.022776 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:35.022776 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 9 23:59:35.515218 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 23:59:35.875084 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 9 23:59:35.875084 ignition[1303]: INFO : files: op(c): [started] processing unit "containerd.service"
May 9 23:59:35.881851 ignition[1303]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 23:59:35.881851 ignition[1303]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 23:59:35.881851 ignition[1303]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 9 23:59:35.881851 ignition[1303]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 9 23:59:35.881851 ignition[1303]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:59:35.881851 ignition[1303]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:59:35.881851 ignition[1303]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 9 23:59:35.881851 ignition[1303]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 9 23:59:35.881851 ignition[1303]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 9 23:59:35.881851 ignition[1303]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:59:35.912562 ignition[1303]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:59:35.912562 ignition[1303]: INFO : files: files passed
May 9 23:59:35.912562 ignition[1303]: INFO : Ignition finished successfully
May 9 23:59:35.920075 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:59:35.927072 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:59:35.940258 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:59:35.950460 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:59:35.952406 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:59:35.970763 initrd-setup-root-after-ignition[1332]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:35.970763 initrd-setup-root-after-ignition[1332]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:35.978246 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:59:35.983924 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:59:35.989140 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 23:59:36.004030 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 23:59:36.051864 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 23:59:36.052325 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 23:59:36.060821 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 23:59:36.063115 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 23:59:36.066853 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 23:59:36.089113 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 23:59:36.123297 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:59:36.136911 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 23:59:36.160959 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 23:59:36.163960 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:59:36.168243 systemd[1]: Stopped target timers.target - Timer Units.
May 9 23:59:36.171811 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 23:59:36.172172 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:59:36.180024 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 23:59:36.182353 systemd[1]: Stopped target basic.target - Basic System. May 9 23:59:36.188350 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 23:59:36.190965 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:59:36.195521 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 23:59:36.199611 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 23:59:36.202849 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:59:36.205530 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 23:59:36.209088 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 23:59:36.211170 systemd[1]: Stopped target swap.target - Swaps. May 9 23:59:36.213599 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 23:59:36.213972 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 23:59:36.223112 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 23:59:36.226928 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:59:36.236209 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 23:59:36.239745 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:59:36.244722 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 23:59:36.245501 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 23:59:36.251141 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 23:59:36.251409 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:59:36.253897 systemd[1]: ignition-files.service: Deactivated successfully. May 9 23:59:36.254111 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 23:59:36.278139 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 23:59:36.280902 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 23:59:36.281225 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:59:36.293192 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 23:59:36.296863 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 23:59:36.297161 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:59:36.299608 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 23:59:36.299889 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 23:59:36.321994 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 23:59:36.324165 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 23:59:36.362307 ignition[1356]: INFO : Ignition 2.19.0 May 9 23:59:36.362307 ignition[1356]: INFO : Stage: umount May 9 23:59:36.365936 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:59:36.365936 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:59:36.370261 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:59:36.374498 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 9 23:59:36.377301 ignition[1356]: INFO : PUT result: OK May 9 23:59:36.387537 ignition[1356]: INFO : umount: umount passed May 9 23:59:36.389205 ignition[1356]: INFO : Ignition finished successfully May 9 23:59:36.389735 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 23:59:36.390842 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 23:59:36.398222 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 23:59:36.401347 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 23:59:36.404969 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 23:59:36.405060 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 23:59:36.408812 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 23:59:36.408904 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 23:59:36.409474 systemd[1]: ignition-fetch.service: Deactivated successfully. May 9 23:59:36.409548 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 9 23:59:36.410218 systemd[1]: Stopped target network.target - Network. May 9 23:59:36.413086 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 23:59:36.413174 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:59:36.413805 systemd[1]: Stopped target paths.target - Path Units. May 9 23:59:36.414491 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 23:59:36.438172 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:59:36.441228 systemd[1]: Stopped target slices.target - Slice Units. May 9 23:59:36.442949 systemd[1]: Stopped target sockets.target - Socket Units. May 9 23:59:36.444811 systemd[1]: iscsid.socket: Deactivated successfully. May 9 23:59:36.444891 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:59:36.447354 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 23:59:36.447425 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:59:36.465853 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 23:59:36.465984 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 23:59:36.468974 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 23:59:36.469089 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 23:59:36.472960 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 23:59:36.473081 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 23:59:36.483132 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 23:59:36.486482 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 23:59:36.495756 systemd-networkd[1112]: eth0: DHCPv6 lease lost May 9 23:59:36.500509 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 23:59:36.501105 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 23:59:36.508062 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 23:59:36.510264 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 23:59:36.517443 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 23:59:36.518461 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 23:59:36.530036 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
May 9 23:59:36.531956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 23:59:36.533844 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:59:36.541819 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:59:36.541929 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:36.544154 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 23:59:36.544266 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 23:59:36.546975 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 23:59:36.547092 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:59:36.549933 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:59:36.582463 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 23:59:36.585227 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:59:36.590814 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 23:59:36.591065 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 23:59:36.595162 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 23:59:36.595309 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 23:59:36.599102 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 23:59:36.599181 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:59:36.601205 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 23:59:36.601296 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 23:59:36.605152 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 23:59:36.605248 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 23:59:36.609234 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:59:36.609331 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:59:36.629059 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 23:59:36.634174 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 23:59:36.634313 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:59:36.636753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:59:36.636865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:59:36.668287 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 23:59:36.668768 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 23:59:36.677192 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 23:59:36.689819 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 23:59:36.732279 systemd[1]: Switching root. May 9 23:59:36.770426 systemd-journald[251]: Journal stopped May 9 23:59:39.178550 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
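[Annotation] The "Switching root" entry and journald's SIGTERM mark the initrd-to-real-root handoff: the runtime journal is closed, PID 1 pivots into /sysroot, and logging resumes under the real root. As a sketch, the step PID 1 performs here via initrd-switch-root.service corresponds roughly to:

    systemctl --no-block switch-root /sysroot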
May 9 23:59:39.182715 kernel: SELinux: policy capability network_peer_controls=1 May 9 23:59:39.182776 kernel: SELinux: policy capability open_perms=1 May 9 23:59:39.182810 kernel: SELinux: policy capability extended_socket_class=1 May 9 23:59:39.182841 kernel: SELinux: policy capability always_check_network=0 May 9 23:59:39.182871 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 23:59:39.182904 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 23:59:39.182941 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 23:59:39.182971 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 23:59:39.183002 kernel: audit: type=1403 audit(1746835177.429:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 23:59:39.183044 systemd[1]: Successfully loaded SELinux policy in 50.118ms. May 9 23:59:39.183094 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.956ms. May 9 23:59:39.183130 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 23:59:39.183161 systemd[1]: Detected virtualization amazon. May 9 23:59:39.183193 systemd[1]: Detected architecture arm64. May 9 23:59:39.183224 systemd[1]: Detected first boot. May 9 23:59:39.183259 systemd[1]: Initializing machine ID from VM UUID. May 9 23:59:39.183293 zram_generator::config[1419]: No configuration found. May 9 23:59:39.183330 systemd[1]: Populated /etc with preset unit settings. May 9 23:59:39.183362 systemd[1]: Queued start job for default target multi-user.target. May 9 23:59:39.183397 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 9 23:59:39.183432 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 23:59:39.183466 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 23:59:39.183498 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 23:59:39.183533 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 23:59:39.183567 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 23:59:39.183599 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 23:59:39.183629 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 23:59:39.183686 systemd[1]: Created slice user.slice - User and Session Slice. May 9 23:59:39.183720 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:59:39.183761 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:59:39.183795 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 23:59:39.183827 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 23:59:39.183864 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 23:59:39.183895 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:59:39.183926 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
May 9 23:59:39.183958 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:59:39.183989 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 23:59:39.184019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:59:39.184050 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:59:39.184081 systemd[1]: Reached target slices.target - Slice Units. May 9 23:59:39.184117 systemd[1]: Reached target swap.target - Swaps. May 9 23:59:39.184147 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 23:59:39.184179 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 23:59:39.184211 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 23:59:39.184240 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 9 23:59:39.184270 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 23:59:39.184299 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:59:39.184332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:59:39.184361 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 23:59:39.184395 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 23:59:39.184427 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 23:59:39.184458 systemd[1]: Mounting media.mount - External Media Directory... May 9 23:59:39.184488 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 23:59:39.184518 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 23:59:39.184549 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 23:59:39.184578 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 23:59:39.184610 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:59:39.185781 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:59:39.185832 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 23:59:39.185862 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:59:39.185892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:59:39.185921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:59:39.185950 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 23:59:39.185982 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:59:39.186014 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 23:59:39.186046 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 9 23:59:39.186081 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 9 23:59:39.186113 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:59:39.186144 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 9 23:59:39.186189 kernel: loop: module loaded May 9 23:59:39.186220 kernel: ACPI: bus type drm_connector registered May 9 23:59:39.186249 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 23:59:39.186278 kernel: fuse: init (API version 7.39) May 9 23:59:39.186306 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 23:59:39.186335 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:59:39.186368 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 23:59:39.186404 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 23:59:39.186436 systemd[1]: Mounted media.mount - External Media Directory. May 9 23:59:39.186467 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 23:59:39.186496 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 23:59:39.186525 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 23:59:39.186555 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:59:39.186584 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 23:59:39.186616 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 23:59:39.187693 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 23:59:39.187762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:59:39.187793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:59:39.187823 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:59:39.187853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:59:39.187894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:59:39.187926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:59:39.188015 systemd-journald[1516]: Collecting audit messages is disabled. May 9 23:59:39.188078 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 23:59:39.188115 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 9 23:59:39.188149 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:59:39.188179 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:59:39.188213 systemd-journald[1516]: Journal started May 9 23:59:39.188263 systemd-journald[1516]: Runtime Journal (/run/log/journal/ec2eb71b5a58c466ee086101750b3e08) is 8.0M, max 75.3M, 67.3M free. May 9 23:59:39.194886 systemd[1]: Started systemd-journald.service - Journal Service. May 9 23:59:39.198870 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 23:59:39.201635 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 23:59:39.205743 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 23:59:39.232201 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 23:59:39.241900 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 23:59:39.257884 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 23:59:39.260839 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
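[Annotation] The journal sizes reported above (8.0M runtime journal, 75.3M max on /run) are journald's computed defaults for this tmpfs. They can be pinned explicitly in journald.conf; a sketch with illustrative values:

    # /etc/systemd/journald.conf
    [Journal]
    Storage=persistent
    RuntimeMaxUse=64M
    SystemMaxUse=200M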
May 9 23:59:39.283995 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 23:59:39.288965 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 23:59:39.291238 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:59:39.300915 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 23:59:39.303176 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:59:39.318949 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:59:39.328104 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 23:59:39.344641 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 23:59:39.347392 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 23:59:39.371814 systemd-journald[1516]: Time spent on flushing to /var/log/journal/ec2eb71b5a58c466ee086101750b3e08 is 71.697ms for 893 entries. May 9 23:59:39.371814 systemd-journald[1516]: System Journal (/var/log/journal/ec2eb71b5a58c466ee086101750b3e08) is 8.0M, max 195.6M, 187.6M free. May 9 23:59:39.460515 systemd-journald[1516]: Received client request to flush runtime journal. May 9 23:59:39.366466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:59:39.385044 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 23:59:39.389691 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 9 23:59:39.397047 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 23:59:39.436001 udevadm[1574]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 9 23:59:39.446600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:39.466340 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 23:59:39.488917 systemd-tmpfiles[1568]: ACLs are not supported, ignoring. May 9 23:59:39.488956 systemd-tmpfiles[1568]: ACLs are not supported, ignoring. May 9 23:59:39.500026 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 23:59:39.509001 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 23:59:39.574360 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 23:59:39.588924 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 23:59:39.618141 systemd-tmpfiles[1590]: ACLs are not supported, ignoring. May 9 23:59:39.618726 systemd-tmpfiles[1590]: ACLs are not supported, ignoring. May 9 23:59:39.629428 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:59:40.310423 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 23:59:40.328989 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:59:40.376175 systemd-udevd[1596]: Using default interface naming scheme 'v255'. May 9 23:59:40.424966 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 9 23:59:40.440070 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 23:59:40.467928 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 23:59:40.597803 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 23:59:40.615512 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. May 9 23:59:40.619050 (udev-worker)[1607]: Network interface NamePolicy= disabled on kernel command line. May 9 23:59:40.763431 systemd-networkd[1599]: lo: Link UP May 9 23:59:40.763444 systemd-networkd[1599]: lo: Gained carrier May 9 23:59:40.767465 systemd-networkd[1599]: Enumeration completed May 9 23:59:40.767853 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:59:40.775551 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:59:40.775565 systemd-networkd[1599]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:59:40.778358 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:59:40.778596 systemd-networkd[1599]: eth0: Link UP May 9 23:59:40.779254 systemd-networkd[1599]: eth0: Gained carrier May 9 23:59:40.779774 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:59:40.781979 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 23:59:40.789770 systemd-networkd[1599]: eth0: DHCPv4 address 172.31.31.45/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 9 23:59:40.882887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:59:40.894695 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1613) May 9 23:59:41.072520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:59:41.105090 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 23:59:41.133528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 9 23:59:41.141962 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 23:59:41.190703 lvm[1725]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 23:59:41.228296 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 23:59:41.231051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 23:59:41.245164 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 23:59:41.254592 lvm[1728]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 23:59:41.292454 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 23:59:41.296635 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 23:59:41.299305 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 23:59:41.299491 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:59:41.301855 systemd[1]: Reached target machines.target - Containers. 
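[Annotation] eth0 above is picked up by the stock catch-all zz-default.network, hence the repeated "potentially unpredictable interface name" notes, and is configured over DHCP (172.31.31.45/20 from gateway 172.31.16.1). A dedicated unit that pins the interface explicitly would look roughly like this (file name hypothetical):

    # /etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0
    [Network]
    DHCP=yes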
May 9 23:59:41.305810 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 23:59:41.313991 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 23:59:41.327737 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 23:59:41.331012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:59:41.336001 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 23:59:41.341072 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 23:59:41.357459 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 23:59:41.361137 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 23:59:41.401898 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 23:59:41.404847 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 23:59:41.413214 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 23:59:41.422748 kernel: loop0: detected capacity change from 0 to 114328 May 9 23:59:41.528736 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 23:59:41.556700 kernel: loop1: detected capacity change from 0 to 194096 May 9 23:59:41.674203 kernel: loop2: detected capacity change from 0 to 52536 May 9 23:59:41.783755 kernel: loop3: detected capacity change from 0 to 114432 May 9 23:59:41.887714 kernel: loop4: detected capacity change from 0 to 114328 May 9 23:59:41.899702 kernel: loop5: detected capacity change from 0 to 194096 May 9 23:59:41.930723 kernel: loop6: detected capacity change from 0 to 52536 May 9 23:59:41.942693 kernel: loop7: detected capacity change from 0 to 114432 May 9 23:59:41.953773 (sd-merge)[1749]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 9 23:59:41.954825 (sd-merge)[1749]: Merged extensions into '/usr'. May 9 23:59:41.962366 systemd[1]: Reloading requested from client PID 1736 ('systemd-sysext') (unit systemd-sysext.service)... May 9 23:59:41.962577 systemd[1]: Reloading... May 9 23:59:42.106009 zram_generator::config[1780]: No configuration found. May 9 23:59:42.377801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:59:42.530798 systemd[1]: Reloading finished in 567 ms. May 9 23:59:42.557804 systemd-networkd[1599]: eth0: Gained IPv6LL May 9 23:59:42.572198 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 23:59:42.576024 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 23:59:42.599034 systemd[1]: Starting ensure-sysext.service... May 9 23:59:42.609141 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 23:59:42.626365 systemd[1]: Reloading requested from client PID 1837 ('systemctl') (unit ensure-sysext.service)... May 9 23:59:42.626399 systemd[1]: Reloading... May 9 23:59:42.661817 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
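[Annotation] The loop0 through loop7 capacity changes above are sysext images being attached; sd-merge then overlays the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' extensions onto /usr, which is what triggers the systemd reload that follows. The merge state can be inspected and redone at runtime:

    systemd-sysext list       # show discovered extension images
    systemd-sysext refresh    # unmerge, rescan extension directories, re-merge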
May 9 23:59:42.663622 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 23:59:42.665697 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 23:59:42.666354 systemd-tmpfiles[1838]: ACLs are not supported, ignoring. May 9 23:59:42.666600 systemd-tmpfiles[1838]: ACLs are not supported, ignoring. May 9 23:59:42.673866 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:59:42.674076 systemd-tmpfiles[1838]: Skipping /boot May 9 23:59:42.704519 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:59:42.704552 systemd-tmpfiles[1838]: Skipping /boot May 9 23:59:42.760737 zram_generator::config[1867]: No configuration found. May 9 23:59:42.789191 ldconfig[1732]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 23:59:43.043446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:59:43.196527 systemd[1]: Reloading finished in 569 ms. May 9 23:59:43.220918 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 23:59:43.233754 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:59:43.254998 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 23:59:43.271046 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 23:59:43.277957 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 23:59:43.293027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 23:59:43.306982 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 23:59:43.325101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:59:43.333192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:59:43.340140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:59:43.360109 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:59:43.364287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:59:43.384957 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 23:59:43.411997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:59:43.412422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:59:43.429583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:59:43.432046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:59:43.436630 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:59:43.445335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:59:43.472085 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 23:59:43.484179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
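[Annotation] The "Duplicate line for path ... ignoring" warnings above mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles applies only one of them and drops the rest, so the warnings are harmless. For reference, a tmpfiles.d line follows this column format (illustrative entry, not one of the conflicting lines):

    # Type Path              Mode UID  GID             Age Argument
    d      /var/log/journal  2755 root systemd-journal -   -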
May 9 23:59:43.492284 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:59:43.508032 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:59:43.511773 augenrules[1967]: No rules May 9 23:59:43.526566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:59:43.547238 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:59:43.549422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:59:43.549809 systemd[1]: Reached target time-set.target - System Time Set. May 9 23:59:43.563307 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 23:59:43.569503 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 23:59:43.574901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:59:43.575296 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:59:43.583980 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 23:59:43.590419 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:59:43.590837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:59:43.601738 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:59:43.602174 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:59:43.609088 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:59:43.609447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:59:43.627927 systemd[1]: Finished ensure-sysext.service. May 9 23:59:43.638113 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:59:43.640884 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:59:43.640941 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 23:59:43.642158 systemd-resolved[1934]: Positive Trust Anchors: May 9 23:59:43.642763 systemd-resolved[1934]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 23:59:43.642836 systemd-resolved[1934]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 23:59:43.648483 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 23:59:43.657123 systemd-resolved[1934]: Defaulting to hostname 'linux'. May 9 23:59:43.660414 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 23:59:43.662894 systemd[1]: Reached target network.target - Network. May 9 23:59:43.664800 systemd[1]: Reached target network-online.target - Network is Online. 
May 9 23:59:43.666907 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 23:59:43.669155 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:59:43.671303 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 23:59:43.673669 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 23:59:43.676248 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 23:59:43.678479 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 23:59:43.680839 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 23:59:43.683397 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 23:59:43.683462 systemd[1]: Reached target paths.target - Path Units. May 9 23:59:43.685184 systemd[1]: Reached target timers.target - Timer Units. May 9 23:59:43.688119 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 23:59:43.692749 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 23:59:43.696934 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 23:59:43.709342 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 23:59:43.711715 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:59:43.713798 systemd[1]: Reached target basic.target - Basic System. May 9 23:59:43.716064 systemd[1]: System is tainted: cgroupsv1 May 9 23:59:43.716285 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 23:59:43.716452 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 23:59:43.726024 systemd[1]: Starting containerd.service - containerd container runtime... May 9 23:59:43.733437 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 9 23:59:43.749017 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 23:59:43.754843 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 23:59:43.760981 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 23:59:43.763205 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 23:59:43.772833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:59:43.786073 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 23:59:43.806012 systemd[1]: Started ntpd.service - Network Time Service. May 9 23:59:43.823121 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 23:59:43.836893 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 23:59:43.854849 systemd[1]: Starting setup-oem.service - Setup OEM... May 9 23:59:43.860690 jq[1998]: false May 9 23:59:43.885179 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 23:59:43.901026 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 9 23:59:43.915542 dbus-daemon[1997]: [system] SELinux support is enabled May 9 23:59:43.927133 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 23:59:43.932890 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 23:59:43.937505 dbus-daemon[1997]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1599 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 9 23:59:43.942355 extend-filesystems[1999]: Found loop4 May 9 23:59:43.942355 extend-filesystems[1999]: Found loop5 May 9 23:59:43.942355 extend-filesystems[1999]: Found loop6 May 9 23:59:43.942355 extend-filesystems[1999]: Found loop7 May 9 23:59:43.942355 extend-filesystems[1999]: Found nvme0n1 May 9 23:59:43.942355 extend-filesystems[1999]: Found nvme0n1p1 May 9 23:59:43.942355 extend-filesystems[1999]: Found nvme0n1p2 May 9 23:59:43.942355 extend-filesystems[1999]: Found nvme0n1p3 May 9 23:59:43.942355 extend-filesystems[1999]: Found usr May 9 23:59:43.942355 extend-filesystems[1999]: Found nvme0n1p4 May 9 23:59:43.942355 extend-filesystems[1999]: Found nvme0n1p6 May 9 23:59:43.942355 extend-filesystems[1999]: Found nvme0n1p7 May 9 23:59:43.942355 extend-filesystems[1999]: Found nvme0n1p9 May 9 23:59:43.942355 extend-filesystems[1999]: Checking size of /dev/nvme0n1p9 May 9 23:59:43.983974 systemd[1]: Starting update-engine.service - Update Engine... May 9 23:59:43.993893 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 23:59:43.999493 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 23:59:44.022213 ntpd[2003]: ntpd 4.2.8p17@1.4004-o Fri May 9 22:02:28 UTC 2025 (1): Starting May 9 23:59:44.023235 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: ntpd 4.2.8p17@1.4004-o Fri May 9 22:02:28 UTC 2025 (1): Starting May 9 23:59:44.023235 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 9 23:59:44.023235 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: ---------------------------------------------------- May 9 23:59:44.023235 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: ntp-4 is maintained by Network Time Foundation, May 9 23:59:44.023235 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 9 23:59:44.023235 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: corporation. Support and training for ntp-4 are May 9 23:59:44.023235 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: available at https://www.nwtime.org/support May 9 23:59:44.023235 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: ---------------------------------------------------- May 9 23:59:44.022269 ntpd[2003]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 9 23:59:44.022289 ntpd[2003]: ---------------------------------------------------- May 9 23:59:44.022308 ntpd[2003]: ntp-4 is maintained by Network Time Foundation, May 9 23:59:44.022326 ntpd[2003]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 9 23:59:44.022344 ntpd[2003]: corporation. Support and training for ntp-4 are May 9 23:59:44.024348 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 9 23:59:44.022363 ntpd[2003]: available at https://www.nwtime.org/support May 9 23:59:44.022381 ntpd[2003]: ---------------------------------------------------- May 9 23:59:44.027686 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 23:59:44.036884 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: proto: precision = 0.096 usec (-23) May 9 23:59:44.035379 ntpd[2003]: proto: precision = 0.096 usec (-23) May 9 23:59:44.037441 ntpd[2003]: basedate set to 2025-04-27 May 9 23:59:44.037579 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: basedate set to 2025-04-27 May 9 23:59:44.037674 ntpd[2003]: gps base set to 2025-04-27 (week 2364) May 9 23:59:44.037780 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: gps base set to 2025-04-27 (week 2364) May 9 23:59:44.042581 ntpd[2003]: Listen and drop on 0 v6wildcard [::]:123 May 9 23:59:44.042790 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: Listen and drop on 0 v6wildcard [::]:123 May 9 23:59:44.042938 ntpd[2003]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 9 23:59:44.043063 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 9 23:59:44.043410 ntpd[2003]: Listen normally on 2 lo 127.0.0.1:123 May 9 23:59:44.043776 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: Listen normally on 2 lo 127.0.0.1:123 May 9 23:59:44.043776 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: Listen normally on 3 eth0 172.31.31.45:123 May 9 23:59:44.043776 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: Listen normally on 4 lo [::1]:123 May 9 23:59:44.043776 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: Listen normally on 5 eth0 [fe80::41f:e7ff:fec8:8d9f%2]:123 May 9 23:59:44.043481 ntpd[2003]: Listen normally on 3 eth0 172.31.31.45:123 May 9 23:59:44.043547 ntpd[2003]: Listen normally on 4 lo [::1]:123 May 9 23:59:44.043621 ntpd[2003]: Listen normally on 5 eth0 [fe80::41f:e7ff:fec8:8d9f%2]:123 May 9 23:59:44.044201 ntpd[2003]: Listening on routing socket on fd #22 for interface updates May 9 23:59:44.044489 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: Listening on routing socket on fd #22 for interface updates May 9 23:59:44.047577 extend-filesystems[1999]: Resized partition /dev/nvme0n1p9 May 9 23:59:44.061899 extend-filesystems[2041]: resize2fs 1.47.1 (20-May-2024) May 9 23:59:44.099198 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 9 23:59:44.091532 systemd[1]: motdgen.service: Deactivated successfully. May 9 23:59:44.106826 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:59:44.106826 ntpd[2003]: 9 May 23:59:44 ntpd[2003]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:59:44.065879 ntpd[2003]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:59:44.098533 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 23:59:44.065928 ntpd[2003]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:59:44.111618 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 23:59:44.113352 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 23:59:44.143843 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
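[Annotation] ntpd comes up above with only its wildcard, loopback, and eth0 listeners; no time sources appear in this excerpt, so servers come from its configuration file. A minimal /etc/ntp.conf sketch (the pool name is illustrative, not necessarily what this image ships):

    server 0.pool.ntp.org iburst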
May 9 23:59:44.166429 update_engine[2025]: I20250509 23:59:44.166241 2025 main.cc:92] Flatcar Update Engine starting May 9 23:59:44.172273 jq[2031]: true May 9 23:59:44.174352 (ntainerd)[2047]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 23:59:44.200212 update_engine[2025]: I20250509 23:59:44.199002 2025 update_check_scheduler.cc:74] Next update check in 11m15s May 9 23:59:44.208582 systemd[1]: Started update-engine.service - Update Engine. May 9 23:59:44.208240 dbus-daemon[1997]: [system] Successfully activated service 'org.freedesktop.systemd1' May 9 23:59:44.219434 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 23:59:44.226394 coreos-metadata[1995]: May 09 23:59:44.222 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 9 23:59:44.219497 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 23:59:44.235840 coreos-metadata[1995]: May 09 23:59:44.227 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 9 23:59:44.235840 coreos-metadata[1995]: May 09 23:59:44.232 INFO Fetch successful May 9 23:59:44.235840 coreos-metadata[1995]: May 09 23:59:44.232 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 9 23:59:44.235840 coreos-metadata[1995]: May 09 23:59:44.235 INFO Fetch successful May 9 23:59:44.235840 coreos-metadata[1995]: May 09 23:59:44.235 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 9 23:59:44.247122 coreos-metadata[1995]: May 09 23:59:44.244 INFO Fetch successful May 9 23:59:44.247122 coreos-metadata[1995]: May 09 23:59:44.244 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 9 23:59:44.239041 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 9 23:59:44.241875 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 23:59:44.241913 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 23:59:44.248365 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 23:59:44.259501 coreos-metadata[1995]: May 09 23:59:44.253 INFO Fetch successful May 9 23:59:44.259501 coreos-metadata[1995]: May 09 23:59:44.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 9 23:59:44.259501 coreos-metadata[1995]: May 09 23:59:44.258 INFO Fetch failed with 404: resource not found May 9 23:59:44.259501 coreos-metadata[1995]: May 09 23:59:44.258 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 9 23:59:44.251844 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
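[Annotation] Both Ignition earlier and coreos-metadata here use IMDSv2 semantics against the EC2 instance metadata service: a session token is obtained with a PUT, then every GET presents it in a header. The same flow by hand:

    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
        -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600')
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/2021-01-03/meta-data/instance-id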
May 9 23:59:44.275627 coreos-metadata[1995]: May 09 23:59:44.265 INFO Fetch successful May 9 23:59:44.275627 coreos-metadata[1995]: May 09 23:59:44.265 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 9 23:59:44.277970 coreos-metadata[1995]: May 09 23:59:44.277 INFO Fetch successful May 9 23:59:44.277970 coreos-metadata[1995]: May 09 23:59:44.277 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 9 23:59:44.280961 coreos-metadata[1995]: May 09 23:59:44.279 INFO Fetch successful May 9 23:59:44.280961 coreos-metadata[1995]: May 09 23:59:44.279 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 9 23:59:44.284702 coreos-metadata[1995]: May 09 23:59:44.281 INFO Fetch successful May 9 23:59:44.284847 tar[2044]: linux-arm64/helm May 9 23:59:44.300108 coreos-metadata[1995]: May 09 23:59:44.285 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 9 23:59:44.300108 coreos-metadata[1995]: May 09 23:59:44.295 INFO Fetch successful May 9 23:59:44.317707 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 9 23:59:44.317865 jq[2056]: true May 9 23:59:44.353012 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 23:59:44.359621 extend-filesystems[2041]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 9 23:59:44.359621 extend-filesystems[2041]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 23:59:44.359621 extend-filesystems[2041]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 9 23:59:44.353511 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 23:59:44.374377 extend-filesystems[1999]: Resized filesystem in /dev/nvme0n1p9 May 9 23:59:44.374366 systemd[1]: Finished setup-oem.service - Setup OEM. May 9 23:59:44.382163 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 9 23:59:44.431573 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 9 23:59:44.446934 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 23:59:44.602701 bash[2116]: Updated "/home/core/.ssh/authorized_keys" May 9 23:59:44.609892 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 23:59:44.623867 systemd[1]: Starting sshkeys.service... May 9 23:59:44.698849 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 9 23:59:44.708364 systemd-logind[2020]: Watching system buttons on /dev/input/event0 (Power Button) May 9 23:59:44.724526 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2103) May 9 23:59:44.710070 systemd-logind[2020]: Watching system buttons on /dev/input/event1 (Sleep Button) May 9 23:59:44.711790 systemd-logind[2020]: New seat seat0. May 9 23:59:44.724499 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 9 23:59:44.728379 systemd[1]: Started systemd-logind.service - User Login Management. May 9 23:59:44.760354 amazon-ssm-agent[2085]: Initializing new seelog logger May 9 23:59:44.772398 amazon-ssm-agent[2085]: New Seelog Logger Creation Complete May 9 23:59:44.772398 amazon-ssm-agent[2085]: 2025/05/09 23:59:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
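[Annotation] extend-filesystems grew the root ext4 on nvme0n1p9 online, from 553472 to 1489915 4k blocks, to fill the enlarged EBS volume. A sketch of the equivalent manual sequence (growpart is from cloud-utils; this is not how the Flatcar unit necessarily implements it):

    growpart /dev/nvme0n1 9      # grow partition 9 to fill the disk
    resize2fs /dev/nvme0n1p9     # online-grow the mounted ext4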
May 9 23:59:44.772398 amazon-ssm-agent[2085]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:59:44.772398 amazon-ssm-agent[2085]: 2025/05/09 23:59:44 processing appconfig overrides May 9 23:59:44.772398 amazon-ssm-agent[2085]: 2025/05/09 23:59:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:59:44.772398 amazon-ssm-agent[2085]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:59:44.772398 amazon-ssm-agent[2085]: 2025/05/09 23:59:44 processing appconfig overrides May 9 23:59:44.779747 locksmithd[2068]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 23:59:44.786387 amazon-ssm-agent[2085]: 2025/05/09 23:59:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:59:44.786387 amazon-ssm-agent[2085]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:59:44.786387 amazon-ssm-agent[2085]: 2025/05/09 23:59:44 processing appconfig overrides May 9 23:59:44.786387 amazon-ssm-agent[2085]: 2025-05-09 23:59:44 INFO Proxy environment variables: May 9 23:59:44.796076 amazon-ssm-agent[2085]: 2025/05/09 23:59:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:59:44.797702 amazon-ssm-agent[2085]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:59:44.797702 amazon-ssm-agent[2085]: 2025/05/09 23:59:44 processing appconfig overrides May 9 23:59:44.889137 amazon-ssm-agent[2085]: 2025-05-09 23:59:44 INFO https_proxy: May 9 23:59:44.964521 dbus-daemon[1997]: [system] Successfully activated service 'org.freedesktop.hostname1' May 9 23:59:44.965323 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 9 23:59:44.970807 dbus-daemon[1997]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2067 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 9 23:59:44.985246 amazon-ssm-agent[2085]: 2025-05-09 23:59:44 INFO http_proxy: May 9 23:59:45.091726 amazon-ssm-agent[2085]: 2025-05-09 23:59:44 INFO no_proxy: May 9 23:59:45.103920 containerd[2047]: time="2025-05-09T23:59:45.103167658Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 23:59:45.159955 systemd[1]: Starting polkit.service - Authorization Manager... May 9 23:59:45.199151 amazon-ssm-agent[2085]: 2025-05-09 23:59:44 INFO Checking if agent identity type OnPrem can be assumed May 9 23:59:45.199211 containerd[2047]: time="2025-05-09T23:59:45.197971114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 23:59:45.212912 coreos-metadata[2128]: May 09 23:59:45.212 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 9 23:59:45.215192 coreos-metadata[2128]: May 09 23:59:45.214 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 9 23:59:45.215481 coreos-metadata[2128]: May 09 23:59:45.215 INFO Fetch successful May 9 23:59:45.215555 coreos-metadata[2128]: May 09 23:59:45.215 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.215889875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.215999111Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.216037427Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.216512363Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.216552803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.216722003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.216755051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.217140971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.217174007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.217204943Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:59:45.219476 containerd[2047]: time="2025-05-09T23:59:45.217232303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 23:59:45.220079 coreos-metadata[2128]: May 09 23:59:45.218 INFO Fetch successful May 9 23:59:45.220142 containerd[2047]: time="2025-05-09T23:59:45.217386191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 23:59:45.227293 containerd[2047]: time="2025-05-09T23:59:45.227240555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 23:59:45.228334 unknown[2128]: wrote ssh authorized keys file for user: core May 9 23:59:45.232024 containerd[2047]: time="2025-05-09T23:59:45.227810819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:59:45.232024 containerd[2047]: time="2025-05-09T23:59:45.231388247Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 23:59:45.232024 containerd[2047]: time="2025-05-09T23:59:45.231636179Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 9 23:59:45.232024 containerd[2047]: time="2025-05-09T23:59:45.231780971Z" level=info msg="metadata content store policy set" policy=shared May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.255809183Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.255935111Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.255982043Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.256019735Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.256074035Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.256383143Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.257022191Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.257273963Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.257311355Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.257349815Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.257386727Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.257417495Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.257447591Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 23:59:45.258482 containerd[2047]: time="2025-05-09T23:59:45.257481071Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 23:59:45.259174 containerd[2047]: time="2025-05-09T23:59:45.257515163Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 23:59:45.259174 containerd[2047]: time="2025-05-09T23:59:45.257547695Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 23:59:45.259174 containerd[2047]: time="2025-05-09T23:59:45.257577095Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 23:59:45.259174 containerd[2047]: time="2025-05-09T23:59:45.257610023Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 9 23:59:45.271181 containerd[2047]: time="2025-05-09T23:59:45.270756287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271181 containerd[2047]: time="2025-05-09T23:59:45.270868235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271181 containerd[2047]: time="2025-05-09T23:59:45.270914291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271181 containerd[2047]: time="2025-05-09T23:59:45.270952235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271181 containerd[2047]: time="2025-05-09T23:59:45.270995315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271181 containerd[2047]: time="2025-05-09T23:59:45.271038491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271181 containerd[2047]: time="2025-05-09T23:59:45.271090367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271181 containerd[2047]: time="2025-05-09T23:59:45.271133039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271181 containerd[2047]: time="2025-05-09T23:59:45.271175927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271691 containerd[2047]: time="2025-05-09T23:59:45.271224119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271691 containerd[2047]: time="2025-05-09T23:59:45.271262543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271691 containerd[2047]: time="2025-05-09T23:59:45.271306103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271691 containerd[2047]: time="2025-05-09T23:59:45.271390199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271691 containerd[2047]: time="2025-05-09T23:59:45.271440671Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 23:59:45.271691 containerd[2047]: time="2025-05-09T23:59:45.271501955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271691 containerd[2047]: time="2025-05-09T23:59:45.271550423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 23:59:45.271691 containerd[2047]: time="2025-05-09T23:59:45.271589303Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 23:59:45.272059 containerd[2047]: time="2025-05-09T23:59:45.271773635Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 23:59:45.272059 containerd[2047]: time="2025-05-09T23:59:45.271825523Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 23:59:45.272059 containerd[2047]: time="2025-05-09T23:59:45.271857227Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 23:59:45.272059 containerd[2047]: time="2025-05-09T23:59:45.271897691Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 23:59:45.272059 containerd[2047]: time="2025-05-09T23:59:45.271932539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 23:59:45.272059 containerd[2047]: time="2025-05-09T23:59:45.271972403Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 23:59:45.272059 containerd[2047]: time="2025-05-09T23:59:45.271998551Z" level=info msg="NRI interface is disabled by configuration." May 9 23:59:45.272059 containerd[2047]: time="2025-05-09T23:59:45.272032871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 9 23:59:45.283751 containerd[2047]: time="2025-05-09T23:59:45.272611571Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 23:59:45.284031 containerd[2047]: time="2025-05-09T23:59:45.283001399Z" level=info msg="Connect containerd service" May 9 23:59:45.287714 containerd[2047]: time="2025-05-09T23:59:45.286882739Z" level=info msg="using legacy CRI server" May 9 23:59:45.287714 containerd[2047]: time="2025-05-09T23:59:45.287006195Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 23:59:45.287714 containerd[2047]: time="2025-05-09T23:59:45.287584403Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 23:59:45.291670 amazon-ssm-agent[2085]: 2025-05-09 23:59:44 INFO Checking if agent identity type EC2 can be assumed May 9 23:59:45.301683 containerd[2047]: time="2025-05-09T23:59:45.300248531Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:59:45.301683 containerd[2047]: time="2025-05-09T23:59:45.300497927Z" level=info msg="Start subscribing containerd event" May 9 23:59:45.303845 containerd[2047]: time="2025-05-09T23:59:45.302697071Z" level=info msg="Start recovering state" May 9 23:59:45.303845 containerd[2047]: time="2025-05-09T23:59:45.302910155Z" level=info msg="Start event monitor" May 9 23:59:45.303845 containerd[2047]: time="2025-05-09T23:59:45.303723395Z" level=info msg="Start snapshots syncer" May 9 23:59:45.303845 containerd[2047]: time="2025-05-09T23:59:45.303753599Z" level=info msg="Start cni network conf syncer for default" May 9 23:59:45.303845 containerd[2047]: time="2025-05-09T23:59:45.303773771Z" level=info msg="Start streaming server" May 9 23:59:45.305610 polkitd[2175]: Started polkitd version 121 May 9 23:59:45.311694 containerd[2047]: time="2025-05-09T23:59:45.311599175Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 23:59:45.314984 containerd[2047]: time="2025-05-09T23:59:45.313904255Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 23:59:45.320684 systemd[1]: Started containerd.service - containerd container runtime. May 9 23:59:45.325634 containerd[2047]: time="2025-05-09T23:59:45.324920747Z" level=info msg="containerd successfully booted in 0.223418s" May 9 23:59:45.342618 update-ssh-keys[2207]: Updated "/home/core/.ssh/authorized_keys" May 9 23:59:45.346636 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 9 23:59:45.363485 polkitd[2175]: Loading rules from directory /etc/polkit-1/rules.d May 9 23:59:45.381486 systemd[1]: Finished sshkeys.service. May 9 23:59:45.381955 polkitd[2175]: Loading rules from directory /usr/share/polkit-1/rules.d May 9 23:59:45.387363 polkitd[2175]: Finished loading, compiling and executing 2 rules May 9 23:59:45.398911 dbus-daemon[1997]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 9 23:59:45.399223 systemd[1]: Started polkit.service - Authorization Manager. 
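
The long "Start cri plugin with config {...}" banner above is containerd's effective CRI configuration: Snapshotter:overlayfs, SandboxImage:registry.k8s.io/pause:3.8, and SystemdCgroup:false for the runc (io.containerd.runc.v2) runtime. The "no network config found in /etc/cni/net.d" error is expected this early, before any CNI plugin has installed a conflist. A sketch of how to confirm both on a running node:

    sudo containerd config dump | grep -n 'SystemdCgroup'   # effective merged config, same values as the banner
    ls /etc/cni/net.d                                       # empty at this point, hence the cni load error above
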
May 9 23:59:45.405453 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO Agent will take identity from EC2 May 9 23:59:45.403004 polkitd[2175]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 9 23:59:45.467480 systemd-hostnamed[2067]: Hostname set to (transient) May 9 23:59:45.467698 systemd-resolved[1934]: System hostname changed to 'ip-172-31-31-45'. May 9 23:59:45.494678 sshd_keygen[2043]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 23:59:45.502818 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 23:59:45.602899 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 23:59:45.700424 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 23:59:45.703763 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 23:59:45.714086 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 23:59:45.759256 systemd[1]: issuegen.service: Deactivated successfully. May 9 23:59:45.759821 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 23:59:45.775600 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 23:59:45.806693 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 9 23:59:45.815637 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 23:59:45.831671 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 23:59:45.844445 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 9 23:59:45.847618 systemd[1]: Reached target getty.target - Login Prompts. May 9 23:59:45.905515 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 9 23:59:46.007104 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [amazon-ssm-agent] Starting Core Agent May 9 23:59:46.038183 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 9 23:59:46.038370 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [Registrar] Starting registrar module May 9 23:59:46.038490 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 9 23:59:46.038585 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [EC2Identity] EC2 registration was successful. May 9 23:59:46.039575 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [CredentialRefresher] credentialRefresher has started May 9 23:59:46.039704 amazon-ssm-agent[2085]: 2025-05-09 23:59:45 INFO [CredentialRefresher] Starting credentials refresher loop May 9 23:59:46.039800 amazon-ssm-agent[2085]: 2025-05-09 23:59:46 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 9 23:59:46.104683 amazon-ssm-agent[2085]: 2025-05-09 23:59:46 INFO [CredentialRefresher] Next credential rotation will be in 30.9499631992 minutes May 9 23:59:46.219236 tar[2044]: linux-arm64/LICENSE May 9 23:59:46.220305 tar[2044]: linux-arm64/README.md May 9 23:59:46.248556 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 23:59:46.980988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:46.984811 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 23:59:46.988180 systemd[1]: Startup finished in 9.830s (kernel) + 9.607s (userspace) = 19.437s. 
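
The "Startup finished" line is systemd's own accounting at the default target: 9.830s of kernel time plus 9.607s of userspace gives the 19.437s total. After boot the same numbers, broken down per unit, are available from systemd-analyze:

    systemd-analyze                                    # kernel/userspace split, as logged above
    systemd-analyze blame | head                       # slowest units first
    systemd-analyze critical-chain multi-user.target   # the chain that gated this target
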
May 9 23:59:46.998633 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:59:47.072787 amazon-ssm-agent[2085]: 2025-05-09 23:59:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 9 23:59:47.173994 amazon-ssm-agent[2085]: 2025-05-09 23:59:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2299) started May 9 23:59:47.274988 amazon-ssm-agent[2085]: 2025-05-09 23:59:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 9 23:59:48.363336 kubelet[2293]: E0509 23:59:48.363274 2293 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:59:48.368405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:59:48.368827 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:59:50.769705 systemd-resolved[1934]: Clock change detected. Flushing caches. May 9 23:59:53.124793 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 23:59:53.130764 systemd[1]: Started sshd@0-172.31.31.45:22-147.75.109.163:42668.service - OpenSSH per-connection server daemon (147.75.109.163:42668). May 9 23:59:53.319449 sshd[2316]: Accepted publickey for core from 147.75.109.163 port 42668 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:53.322978 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:53.337960 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 23:59:53.343755 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 23:59:53.348926 systemd-logind[2020]: New session 1 of user core. May 9 23:59:53.379679 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 23:59:53.396705 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 23:59:53.406368 (systemd)[2322]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 23:59:53.633896 systemd[2322]: Queued start job for default target default.target. May 9 23:59:53.634593 systemd[2322]: Created slice app.slice - User Application Slice. May 9 23:59:53.634645 systemd[2322]: Reached target paths.target - Paths. May 9 23:59:53.634677 systemd[2322]: Reached target timers.target - Timers. May 9 23:59:53.644443 systemd[2322]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 23:59:53.657401 systemd[2322]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 23:59:53.657519 systemd[2322]: Reached target sockets.target - Sockets. May 9 23:59:53.657550 systemd[2322]: Reached target basic.target - Basic System. May 9 23:59:53.657632 systemd[2322]: Reached target default.target - Main User Target. May 9 23:59:53.657688 systemd[2322]: Startup finished in 239ms. May 9 23:59:53.658469 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 23:59:53.671071 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 9 23:59:53.820889 systemd[1]: Started sshd@1-172.31.31.45:22-147.75.109.163:42678.service - OpenSSH per-connection server daemon (147.75.109.163:42678). May 9 23:59:53.991995 sshd[2334]: Accepted publickey for core from 147.75.109.163 port 42678 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:53.994640 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:54.002288 systemd-logind[2020]: New session 2 of user core. May 9 23:59:54.010848 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 23:59:54.141639 sshd[2334]: pam_unix(sshd:session): session closed for user core May 9 23:59:54.148530 systemd-logind[2020]: Session 2 logged out. Waiting for processes to exit. May 9 23:59:54.149166 systemd[1]: sshd@1-172.31.31.45:22-147.75.109.163:42678.service: Deactivated successfully. May 9 23:59:54.154716 systemd[1]: session-2.scope: Deactivated successfully. May 9 23:59:54.157355 systemd-logind[2020]: Removed session 2. May 9 23:59:54.172732 systemd[1]: Started sshd@2-172.31.31.45:22-147.75.109.163:42690.service - OpenSSH per-connection server daemon (147.75.109.163:42690). May 9 23:59:54.336008 sshd[2342]: Accepted publickey for core from 147.75.109.163 port 42690 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:54.338955 sshd[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:54.347636 systemd-logind[2020]: New session 3 of user core. May 9 23:59:54.357769 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 23:59:54.478587 sshd[2342]: pam_unix(sshd:session): session closed for user core May 9 23:59:54.483983 systemd[1]: sshd@2-172.31.31.45:22-147.75.109.163:42690.service: Deactivated successfully. May 9 23:59:54.489246 systemd[1]: session-3.scope: Deactivated successfully. May 9 23:59:54.489289 systemd-logind[2020]: Session 3 logged out. Waiting for processes to exit. May 9 23:59:54.493939 systemd-logind[2020]: Removed session 3. May 9 23:59:54.508757 systemd[1]: Started sshd@3-172.31.31.45:22-147.75.109.163:42700.service - OpenSSH per-connection server daemon (147.75.109.163:42700). May 9 23:59:54.682651 sshd[2350]: Accepted publickey for core from 147.75.109.163 port 42700 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:54.685139 sshd[2350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:54.693542 systemd-logind[2020]: New session 4 of user core. May 9 23:59:54.700902 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 23:59:54.829559 sshd[2350]: pam_unix(sshd:session): session closed for user core May 9 23:59:54.835611 systemd[1]: sshd@3-172.31.31.45:22-147.75.109.163:42700.service: Deactivated successfully. May 9 23:59:54.841363 systemd[1]: session-4.scope: Deactivated successfully. May 9 23:59:54.841840 systemd-logind[2020]: Session 4 logged out. Waiting for processes to exit. May 9 23:59:54.844531 systemd-logind[2020]: Removed session 4. May 9 23:59:54.858727 systemd[1]: Started sshd@4-172.31.31.45:22-147.75.109.163:42714.service - OpenSSH per-connection server daemon (147.75.109.163:42714). May 9 23:59:55.024497 sshd[2358]: Accepted publickey for core from 147.75.109.163 port 42714 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:55.026935 sshd[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:55.034580 systemd-logind[2020]: New session 5 of user core. 
May 9 23:59:55.042751 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 23:59:55.158506 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 23:59:55.159155 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:59:55.176893 sudo[2362]: pam_unix(sudo:session): session closed for user root May 9 23:59:55.200065 sshd[2358]: pam_unix(sshd:session): session closed for user core May 9 23:59:55.208059 systemd[1]: sshd@4-172.31.31.45:22-147.75.109.163:42714.service: Deactivated successfully. May 9 23:59:55.213210 systemd[1]: session-5.scope: Deactivated successfully. May 9 23:59:55.213786 systemd-logind[2020]: Session 5 logged out. Waiting for processes to exit. May 9 23:59:55.217372 systemd-logind[2020]: Removed session 5. May 9 23:59:55.229747 systemd[1]: Started sshd@5-172.31.31.45:22-147.75.109.163:42722.service - OpenSSH per-connection server daemon (147.75.109.163:42722). May 9 23:59:55.400894 sshd[2367]: Accepted publickey for core from 147.75.109.163 port 42722 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:55.403994 sshd[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:55.411483 systemd-logind[2020]: New session 6 of user core. May 9 23:59:55.416895 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 23:59:55.522186 sudo[2372]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 23:59:55.523476 sudo[2372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:59:55.529704 sudo[2372]: pam_unix(sudo:session): session closed for user root May 9 23:59:55.539795 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 23:59:55.540531 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:59:55.563750 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 23:59:55.568667 auditctl[2375]: No rules May 9 23:59:55.569498 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:59:55.570000 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 23:59:55.586927 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 23:59:55.627698 augenrules[2394]: No rules May 9 23:59:55.631130 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 23:59:55.635806 sudo[2371]: pam_unix(sudo:session): session closed for user root May 9 23:59:55.658561 sshd[2367]: pam_unix(sshd:session): session closed for user core May 9 23:59:55.664185 systemd[1]: sshd@5-172.31.31.45:22-147.75.109.163:42722.service: Deactivated successfully. May 9 23:59:55.665427 systemd-logind[2020]: Session 6 logged out. Waiting for processes to exit. May 9 23:59:55.671904 systemd[1]: session-6.scope: Deactivated successfully. May 9 23:59:55.673764 systemd-logind[2020]: Removed session 6. May 9 23:59:55.687784 systemd[1]: Started sshd@6-172.31.31.45:22-147.75.109.163:42730.service - OpenSSH per-connection server daemon (147.75.109.163:42730). 
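
Sessions 5 and 6 above show a common provisioning pattern: one SSH connection per privileged command. Flattening the sudo log lines of session 6 back into the commands they record (paths exactly as logged), the sequence that leaves auditd with an empty rule set is:

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    sudo auditctl -l    # -> "No rules", matching the auditctl[2375] line above
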
May 9 23:59:55.862251 sshd[2403]: Accepted publickey for core from 147.75.109.163 port 42730 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:55.864769 sshd[2403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:55.873198 systemd-logind[2020]: New session 7 of user core. May 9 23:59:55.879734 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 23:59:55.985482 sudo[2407]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 23:59:55.986091 sudo[2407]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:59:56.564707 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 23:59:56.565173 (dockerd)[2422]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 23:59:57.046693 dockerd[2422]: time="2025-05-09T23:59:57.046066679Z" level=info msg="Starting up" May 9 23:59:57.473806 dockerd[2422]: time="2025-05-09T23:59:57.473104717Z" level=info msg="Loading containers: start." May 9 23:59:57.678326 kernel: Initializing XFRM netlink socket May 9 23:59:57.725317 (udev-worker)[2445]: Network interface NamePolicy= disabled on kernel command line. May 9 23:59:57.817089 systemd-networkd[1599]: docker0: Link UP May 9 23:59:57.841985 dockerd[2422]: time="2025-05-09T23:59:57.841750011Z" level=info msg="Loading containers: done." May 9 23:59:57.867758 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3685520681-merged.mount: Deactivated successfully. May 9 23:59:57.870821 dockerd[2422]: time="2025-05-09T23:59:57.870000579Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 23:59:57.870821 dockerd[2422]: time="2025-05-09T23:59:57.870144771Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 23:59:57.870821 dockerd[2422]: time="2025-05-09T23:59:57.870401823Z" level=info msg="Daemon has completed initialization" May 9 23:59:57.930433 dockerd[2422]: time="2025-05-09T23:59:57.930333783Z" level=info msg="API listen on /run/docker.sock" May 9 23:59:57.930676 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 23:59:58.365864 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 23:59:58.373811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:59:58.716634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:59:58.742954 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:59:58.820320 kubelet[2576]: E0509 23:59:58.820200 2576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:59:58.828497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:59:58.828916 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
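
The kubelet exits with status 1 each time for the same reason logged above: /var/lib/kubelet/config.yaml does not exist yet, so the unit crash-loops (systemd schedules the next restart) while provisioning continues. What follows — pulling the v1.30.12 control-plane images, then the kubelet dialing 172.31.31.45:6443, this node's own address — is consistent with a kubeadm control-plane bootstrap in progress; kubeadm writes that config file, ending the loop. A hedged sketch of that flow (not commands taken from this log):

    # hypothetical: kubeadm writes /var/lib/kubelet/config.yaml during init
    sudo kubeadm init --kubernetes-version v1.30.12
    systemctl status kubelet    # stays active once the config file exists
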
May 9 23:59:59.195523 containerd[2047]: time="2025-05-09T23:59:59.195374533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 9 23:59:59.902946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424306519.mount: Deactivated successfully. May 10 00:00:01.414087 containerd[2047]: time="2025-05-10T00:00:01.413858884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:01.416011 containerd[2047]: time="2025-05-10T00:00:01.415956964Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150" May 10 00:00:01.416964 containerd[2047]: time="2025-05-10T00:00:01.416447344Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:01.422641 containerd[2047]: time="2025-05-10T00:00:01.422541964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:01.425104 containerd[2047]: time="2025-05-10T00:00:01.424816216Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.229377903s" May 10 00:00:01.425104 containerd[2047]: time="2025-05-10T00:00:01.424877992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 10 00:00:01.462159 containerd[2047]: time="2025-05-10T00:00:01.462089525Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 10 00:00:03.054787 containerd[2047]: time="2025-05-10T00:00:03.054722573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:03.057568 containerd[2047]: time="2025-05-10T00:00:03.057479249Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550" May 10 00:00:03.058671 containerd[2047]: time="2025-05-10T00:00:03.058619393Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:03.066494 containerd[2047]: time="2025-05-10T00:00:03.066398201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:03.069206 containerd[2047]: time="2025-05-10T00:00:03.068657309Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.606506188s" 
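
These pulls arrive through the CRI socket containerd announced earlier (/run/containerd/containerd.sock); each "Pulled image ... in 2.229377903s" line gives per-image wall time, and the repo digest in the message pins the exact content. The same operation can be driven by hand with crictl, assuming it is installed (it is not part of this image by default):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-apiserver:v1.30.12
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
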
May 10 00:00:03.069206 containerd[2047]: time="2025-05-10T00:00:03.068738465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 10 00:00:03.107077 containerd[2047]: time="2025-05-10T00:00:03.106803257Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 10 00:00:04.206714 containerd[2047]: time="2025-05-10T00:00:04.206651658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:04.208513 containerd[2047]: time="2025-05-10T00:00:04.208328742Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945" May 10 00:00:04.209647 containerd[2047]: time="2025-05-10T00:00:04.209562426Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:04.215342 containerd[2047]: time="2025-05-10T00:00:04.215214270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:04.218207 containerd[2047]: time="2025-05-10T00:00:04.217700718Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.110840533s" May 10 00:00:04.218207 containerd[2047]: time="2025-05-10T00:00:04.217755834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 10 00:00:04.254454 containerd[2047]: time="2025-05-10T00:00:04.254134194Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 10 00:00:04.876010 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 10 00:00:04.901814 systemd[1]: logrotate.service: Deactivated successfully. May 10 00:00:05.495237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8388484.mount: Deactivated successfully. 
May 10 00:00:06.070338 containerd[2047]: time="2025-05-10T00:00:06.070133215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:06.072069 containerd[2047]: time="2025-05-10T00:00:06.071767508Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705" May 10 00:00:06.073406 containerd[2047]: time="2025-05-10T00:00:06.073313372Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:06.077203 containerd[2047]: time="2025-05-10T00:00:06.077127356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:06.078514 containerd[2047]: time="2025-05-10T00:00:06.078456920Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.824263002s" May 10 00:00:06.078514 containerd[2047]: time="2025-05-10T00:00:06.078514232Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 10 00:00:06.115245 containerd[2047]: time="2025-05-10T00:00:06.115170428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:00:06.682339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441044879.mount: Deactivated successfully. 
May 10 00:00:07.855153 containerd[2047]: time="2025-05-10T00:00:07.855095376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:07.858000 containerd[2047]: time="2025-05-10T00:00:07.857940852Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 10 00:00:07.860212 containerd[2047]: time="2025-05-10T00:00:07.860124696Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:07.868288 containerd[2047]: time="2025-05-10T00:00:07.868199916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:07.870948 containerd[2047]: time="2025-05-10T00:00:07.870767532Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.755528824s" May 10 00:00:07.870948 containerd[2047]: time="2025-05-10T00:00:07.870831480Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 10 00:00:07.908281 containerd[2047]: time="2025-05-10T00:00:07.908073781Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 10 00:00:08.452886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount668921520.mount: Deactivated successfully. 
May 10 00:00:08.467332 containerd[2047]: time="2025-05-10T00:00:08.466747751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:08.468751 containerd[2047]: time="2025-05-10T00:00:08.468685907Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" May 10 00:00:08.471306 containerd[2047]: time="2025-05-10T00:00:08.471180803Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:08.476386 containerd[2047]: time="2025-05-10T00:00:08.476296499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:08.478179 containerd[2047]: time="2025-05-10T00:00:08.477976439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 569.836034ms" May 10 00:00:08.478179 containerd[2047]: time="2025-05-10T00:00:08.478033259Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 10 00:00:08.516560 containerd[2047]: time="2025-05-10T00:00:08.516479412Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 10 00:00:09.081701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:00:09.089832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:09.128771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775771214.mount: Deactivated successfully. May 10 00:00:09.467970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:09.477934 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:00:09.647876 kubelet[2762]: E0510 00:00:09.647813 2762 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:00:09.656884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:00:09.660223 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 10 00:00:12.507863 containerd[2047]: time="2025-05-10T00:00:12.507782379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:12.510630 containerd[2047]: time="2025-05-10T00:00:12.510564411Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" May 10 00:00:12.513219 containerd[2047]: time="2025-05-10T00:00:12.513149764Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:12.519553 containerd[2047]: time="2025-05-10T00:00:12.519457156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:12.522014 containerd[2047]: time="2025-05-10T00:00:12.521957872Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.00540406s" May 10 00:00:12.522476 containerd[2047]: time="2025-05-10T00:00:12.522169612Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 10 00:00:15.250382 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 10 00:00:18.266007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:18.276743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:18.329378 systemd[1]: Reloading requested from client PID 2873 ('systemctl') (unit session-7.scope)... May 10 00:00:18.329415 systemd[1]: Reloading... May 10 00:00:18.517306 zram_generator::config[2913]: No configuration found. May 10 00:00:18.794885 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:00:18.964576 systemd[1]: Reloading finished in 634 ms. May 10 00:00:19.057547 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 10 00:00:19.057769 systemd[1]: kubelet.service: Failed with result 'signal'. May 10 00:00:19.058487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:19.066818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:19.354635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:19.371986 (kubelet)[2988]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:00:19.445589 kubelet[2988]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:00:19.445589 kubelet[2988]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 10 00:00:19.445589 kubelet[2988]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:00:19.447491 kubelet[2988]: I0510 00:00:19.447414 2988 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:00:20.721460 kubelet[2988]: I0510 00:00:20.721404 2988 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:00:20.721460 kubelet[2988]: I0510 00:00:20.721456 2988 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:00:20.722114 kubelet[2988]: I0510 00:00:20.721784 2988 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:00:20.749092 kubelet[2988]: E0510 00:00:20.749054 2988 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:20.749671 kubelet[2988]: I0510 00:00:20.749476 2988 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:00:20.762922 kubelet[2988]: I0510 00:00:20.762845 2988 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:00:20.763749 kubelet[2988]: I0510 00:00:20.763680 2988 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:00:20.764032 kubelet[2988]: I0510 00:00:20.763741 2988 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:00:20.764207 kubelet[2988]: I0510 00:00:20.764067 2988 topology_manager.go:138] "Creating topology 
manager with none policy" May 10 00:00:20.764207 kubelet[2988]: I0510 00:00:20.764087 2988 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:00:20.764377 kubelet[2988]: I0510 00:00:20.764343 2988 state_mem.go:36] "Initialized new in-memory state store" May 10 00:00:20.767413 kubelet[2988]: I0510 00:00:20.765899 2988 kubelet.go:400] "Attempting to sync node with API server" May 10 00:00:20.767413 kubelet[2988]: I0510 00:00:20.765945 2988 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:00:20.767413 kubelet[2988]: I0510 00:00:20.766061 2988 kubelet.go:312] "Adding apiserver pod source" May 10 00:00:20.767413 kubelet[2988]: I0510 00:00:20.766109 2988 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:00:20.767413 kubelet[2988]: W0510 00:00:20.766615 2988 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-45&limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:20.767413 kubelet[2988]: E0510 00:00:20.766705 2988 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-45&limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:20.767413 kubelet[2988]: W0510 00:00:20.767254 2988 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:20.767413 kubelet[2988]: E0510 00:00:20.767350 2988 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:20.770314 kubelet[2988]: I0510 00:00:20.768560 2988 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:00:20.770314 kubelet[2988]: I0510 00:00:20.768916 2988 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:00:20.770314 kubelet[2988]: W0510 00:00:20.768990 2988 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
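
The nodeConfig dump above encodes the kubelet's hard-eviction policy: evict when memory.available falls below 100Mi, nodefs.available below 10%, imagefs.available below 15%, or either filesystem's free inodes below 5%. Each threshold carries either an absolute quantity or a percentage of capacity, never both. A minimal Go sketch of that check follows — illustrative names and types, not the kubelet's internals:

    package main

    import "fmt"

    // Mirrors the HardEvictionThresholds entries in the nodeConfig above: a
    // signal violates its threshold when the observed free amount drops below
    // an absolute quantity (memory.available < 100Mi) or a fraction of
    // capacity (nodefs.available < 10%). Names here are illustrative.
    type threshold struct {
        signal   string
        quantity int64   // absolute bytes; 0 when the percentage form is used
        percent  float64 // fraction of capacity; 0 when the quantity form is used
    }

    func violated(t threshold, available, capacity int64) bool {
        if t.quantity > 0 {
            return available < t.quantity
        }
        return float64(available) < t.percent*float64(capacity)
    }

    func main() {
        mem := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
        nodefs := threshold{signal: "nodefs.available", percent: 0.10}
        fmt.Println(mem.signal, violated(mem, 64<<20, 4<<30))       // true: 64Mi free < 100Mi
        fmt.Println(nodefs.signal, violated(nodefs, 5<<30, 40<<30)) // false: 12.5% free >= 10%
    }
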
May 10 00:00:20.770314 kubelet[2988]: I0510 00:00:20.770041 2988 server.go:1264] "Started kubelet" May 10 00:00:20.783118 kubelet[2988]: E0510 00:00:20.782882 2988 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.45:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-45.183e0160cf2331f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-45,UID:ip-172-31-31-45,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-45,},FirstTimestamp:2025-05-10 00:00:20.770009593 +0000 UTC m=+1.391772536,LastTimestamp:2025-05-10 00:00:20.770009593 +0000 UTC m=+1.391772536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-45,}" May 10 00:00:20.785017 kubelet[2988]: I0510 00:00:20.784928 2988 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:00:20.787829 kubelet[2988]: I0510 00:00:20.787742 2988 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:00:20.788505 kubelet[2988]: I0510 00:00:20.788459 2988 server.go:455] "Adding debug handlers to kubelet server" May 10 00:00:20.788678 kubelet[2988]: I0510 00:00:20.788657 2988 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:00:20.794679 kubelet[2988]: I0510 00:00:20.794610 2988 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:00:20.801541 kubelet[2988]: I0510 00:00:20.801494 2988 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:00:20.803772 kubelet[2988]: W0510 00:00:20.803669 2988 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:20.804008 kubelet[2988]: E0510 00:00:20.803985 2988 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:20.804513 kubelet[2988]: E0510 00:00:20.804479 2988 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:00:20.804740 kubelet[2988]: I0510 00:00:20.804628 2988 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:00:20.806000 kubelet[2988]: I0510 00:00:20.805902 2988 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:00:20.806719 kubelet[2988]: E0510 00:00:20.806634 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": dial tcp 172.31.31.45:6443: connect: connection refused" interval="200ms" May 10 00:00:20.808518 kubelet[2988]: I0510 00:00:20.808335 2988 reconciler.go:26] "Reconciler: start to sync state" May 10 00:00:20.809666 kubelet[2988]: I0510 00:00:20.809548 2988 factory.go:221] Registration of the containerd container factory successfully May 10 00:00:20.809666 kubelet[2988]: I0510 00:00:20.809599 2988 factory.go:221] Registration of the systemd container factory successfully May 10 00:00:20.855449 kubelet[2988]: I0510 00:00:20.855038 2988 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:00:20.855449 kubelet[2988]: I0510 00:00:20.855066 2988 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:00:20.855449 kubelet[2988]: I0510 00:00:20.855096 2988 state_mem.go:36] "Initialized new in-memory state store" May 10 00:00:20.856925 kubelet[2988]: I0510 00:00:20.856675 2988 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:00:20.859920 kubelet[2988]: I0510 00:00:20.859655 2988 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:00:20.859920 kubelet[2988]: I0510 00:00:20.859755 2988 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:00:20.861774 kubelet[2988]: I0510 00:00:20.860556 2988 policy_none.go:49] "None policy: Start" May 10 00:00:20.861966 kubelet[2988]: I0510 00:00:20.861922 2988 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:00:20.862034 kubelet[2988]: I0510 00:00:20.861994 2988 state_mem.go:35] "Initializing new in-memory state store" May 10 00:00:20.862761 kubelet[2988]: I0510 00:00:20.861942 2988 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:00:20.863356 kubelet[2988]: E0510 00:00:20.863243 2988 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:00:20.864023 kubelet[2988]: W0510 00:00:20.863982 2988 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:20.864166 kubelet[2988]: E0510 00:00:20.864146 2988 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:20.876252 kubelet[2988]: I0510 00:00:20.876214 2988 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:00:20.876801 kubelet[2988]: I0510 00:00:20.876734 2988 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:00:20.877069 kubelet[2988]: I0510 00:00:20.877047 2988 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:00:20.882114 kubelet[2988]: E0510 00:00:20.882054 2988 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-45\" not found" May 10 00:00:20.903777 kubelet[2988]: I0510 00:00:20.903741 2988 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-45" May 10 00:00:20.904429 kubelet[2988]: E0510 00:00:20.904365 2988 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.45:6443/api/v1/nodes\": dial tcp 172.31.31.45:6443: connect: connection refused" node="ip-172-31-31-45" May 10 00:00:20.963868 kubelet[2988]: I0510 00:00:20.963806 2988 topology_manager.go:215] "Topology Admit Handler" podUID="e75d6a639253a4883234b4bb6c4a4e23" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-45" May 10 00:00:20.966500 kubelet[2988]: I0510 00:00:20.966451 2988 topology_manager.go:215] "Topology Admit Handler" podUID="7c86769cc88450ec952d27d2afaf0622" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-45" May 10 00:00:20.969931 kubelet[2988]: I0510 00:00:20.969594 2988 topology_manager.go:215] "Topology Admit Handler" podUID="f97dc5f39f2193e1aee4e2da6c235d84" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-45" May 10 00:00:21.007756 kubelet[2988]: E0510 00:00:21.007616 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": dial 
tcp 172.31.31.45:6443: connect: connection refused" interval="400ms" May 10 00:00:21.008828 kubelet[2988]: I0510 00:00:21.008736 2988 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f97dc5f39f2193e1aee4e2da6c235d84-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-45\" (UID: \"f97dc5f39f2193e1aee4e2da6c235d84\") " pod="kube-system/kube-scheduler-ip-172-31-31-45" May 10 00:00:21.009585 kubelet[2988]: I0510 00:00:21.009115 2988 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e75d6a639253a4883234b4bb6c4a4e23-ca-certs\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"e75d6a639253a4883234b4bb6c4a4e23\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" May 10 00:00:21.009585 kubelet[2988]: I0510 00:00:21.009159 2988 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e75d6a639253a4883234b4bb6c4a4e23-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"e75d6a639253a4883234b4bb6c4a4e23\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" May 10 00:00:21.009585 kubelet[2988]: I0510 00:00:21.009199 2988 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c86769cc88450ec952d27d2afaf0622-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"7c86769cc88450ec952d27d2afaf0622\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:21.009585 kubelet[2988]: I0510 00:00:21.009234 2988 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c86769cc88450ec952d27d2afaf0622-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"7c86769cc88450ec952d27d2afaf0622\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:21.009585 kubelet[2988]: I0510 00:00:21.009297 2988 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c86769cc88450ec952d27d2afaf0622-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"7c86769cc88450ec952d27d2afaf0622\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:21.010095 kubelet[2988]: I0510 00:00:21.009339 2988 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e75d6a639253a4883234b4bb6c4a4e23-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"e75d6a639253a4883234b4bb6c4a4e23\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" May 10 00:00:21.010095 kubelet[2988]: I0510 00:00:21.009379 2988 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c86769cc88450ec952d27d2afaf0622-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"7c86769cc88450ec952d27d2afaf0622\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:21.010095 kubelet[2988]: I0510 00:00:21.009413 2988 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7c86769cc88450ec952d27d2afaf0622-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"7c86769cc88450ec952d27d2afaf0622\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:21.107026 kubelet[2988]: I0510 00:00:21.106971 2988 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-45" May 10 00:00:21.107593 kubelet[2988]: E0510 00:00:21.107445 2988 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.45:6443/api/v1/nodes\": dial tcp 172.31.31.45:6443: connect: connection refused" node="ip-172-31-31-45" May 10 00:00:21.276305 containerd[2047]: time="2025-05-10T00:00:21.275998055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-45,Uid:e75d6a639253a4883234b4bb6c4a4e23,Namespace:kube-system,Attempt:0,}" May 10 00:00:21.283685 containerd[2047]: time="2025-05-10T00:00:21.283521203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-45,Uid:7c86769cc88450ec952d27d2afaf0622,Namespace:kube-system,Attempt:0,}" May 10 00:00:21.289077 containerd[2047]: time="2025-05-10T00:00:21.288100343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-45,Uid:f97dc5f39f2193e1aee4e2da6c235d84,Namespace:kube-system,Attempt:0,}" May 10 00:00:21.408500 kubelet[2988]: E0510 00:00:21.408425 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": dial tcp 172.31.31.45:6443: connect: connection refused" interval="800ms" May 10 00:00:21.510184 kubelet[2988]: I0510 00:00:21.509811 2988 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-45" May 10 00:00:21.510385 kubelet[2988]: E0510 00:00:21.510297 2988 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.45:6443/api/v1/nodes\": dial tcp 172.31.31.45:6443: connect: connection refused" node="ip-172-31-31-45" May 10 00:00:21.822894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount563115120.mount: Deactivated successfully. 
May 10 00:00:21.840434 containerd[2047]: time="2025-05-10T00:00:21.840341258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:00:21.842583 containerd[2047]: time="2025-05-10T00:00:21.842517362Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:00:21.844629 containerd[2047]: time="2025-05-10T00:00:21.844532606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 10 00:00:21.846536 containerd[2047]: time="2025-05-10T00:00:21.846488198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:00:21.848705 containerd[2047]: time="2025-05-10T00:00:21.848656562Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:00:21.851732 containerd[2047]: time="2025-05-10T00:00:21.851523758Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:00:21.853117 containerd[2047]: time="2025-05-10T00:00:21.853017026Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:00:21.863048 containerd[2047]: time="2025-05-10T00:00:21.862629314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:00:21.868032 containerd[2047]: time="2025-05-10T00:00:21.867980474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.348199ms" May 10 00:00:21.872124 containerd[2047]: time="2025-05-10T00:00:21.872049266Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 595.940499ms" May 10 00:00:21.877104 containerd[2047]: time="2025-05-10T00:00:21.877050650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 588.812379ms" May 10 00:00:22.067889 kubelet[2988]: W0510 00:00:22.067419 2988 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:22.067889 kubelet[2988]: E0510 
00:00:22.067516 2988 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:22.080923 containerd[2047]: time="2025-05-10T00:00:22.080537735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:22.080923 containerd[2047]: time="2025-05-10T00:00:22.080659619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:22.080923 containerd[2047]: time="2025-05-10T00:00:22.080697419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:22.082304 containerd[2047]: time="2025-05-10T00:00:22.081118235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:22.090186 containerd[2047]: time="2025-05-10T00:00:22.089612387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:22.090186 containerd[2047]: time="2025-05-10T00:00:22.089729891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:22.090186 containerd[2047]: time="2025-05-10T00:00:22.089768015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:22.091040 containerd[2047]: time="2025-05-10T00:00:22.090824399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:22.092848 containerd[2047]: time="2025-05-10T00:00:22.087859775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:22.092848 containerd[2047]: time="2025-05-10T00:00:22.092474927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:22.092848 containerd[2047]: time="2025-05-10T00:00:22.092535479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:22.094751 containerd[2047]: time="2025-05-10T00:00:22.093212159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:22.105756 kubelet[2988]: W0510 00:00:22.105697 2988 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:22.105898 kubelet[2988]: E0510 00:00:22.105773 2988 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:22.210308 kubelet[2988]: E0510 00:00:22.209228 2988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": dial tcp 172.31.31.45:6443: connect: connection refused" interval="1.6s" May 10 00:00:22.254714 containerd[2047]: time="2025-05-10T00:00:22.254646624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-45,Uid:7c86769cc88450ec952d27d2afaf0622,Namespace:kube-system,Attempt:0,} returns sandbox id \"5945d599b9ad4376008fbb7704f6af4c02d5f1b75d800943c8ec53b02ad8b7ea\"" May 10 00:00:22.262870 containerd[2047]: time="2025-05-10T00:00:22.262819008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-45,Uid:e75d6a639253a4883234b4bb6c4a4e23,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dab16f38768979493dea825603c3e2b991b40bce285525b67ce0b68d2a88c26\"" May 10 00:00:22.268899 containerd[2047]: time="2025-05-10T00:00:22.268847856Z" level=info msg="CreateContainer within sandbox \"5945d599b9ad4376008fbb7704f6af4c02d5f1b75d800943c8ec53b02ad8b7ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:00:22.270957 containerd[2047]: time="2025-05-10T00:00:22.270897408Z" level=info msg="CreateContainer within sandbox \"8dab16f38768979493dea825603c3e2b991b40bce285525b67ce0b68d2a88c26\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:00:22.276405 containerd[2047]: time="2025-05-10T00:00:22.276329496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-45,Uid:f97dc5f39f2193e1aee4e2da6c235d84,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4cd365e65baca5a4f11539fa157427aef7cea47d741ac310b5edfa158a87fae\"" May 10 00:00:22.281690 containerd[2047]: time="2025-05-10T00:00:22.281630100Z" level=info msg="CreateContainer within sandbox \"f4cd365e65baca5a4f11539fa157427aef7cea47d741ac310b5edfa158a87fae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:00:22.313205 kubelet[2988]: I0510 00:00:22.313144 2988 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-45" May 10 00:00:22.313739 kubelet[2988]: E0510 00:00:22.313684 2988 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.45:6443/api/v1/nodes\": dial tcp 172.31.31.45:6443: connect: connection refused" node="ip-172-31-31-45" May 10 00:00:22.314605 containerd[2047]: time="2025-05-10T00:00:22.314299572Z" level=info msg="CreateContainer within sandbox \"8dab16f38768979493dea825603c3e2b991b40bce285525b67ce0b68d2a88c26\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"beb0ab704e8547d68f365d7f1a11fd61e1753266e1d9903e8963af23a259aa30\"" May 10 00:00:22.315182 containerd[2047]: time="2025-05-10T00:00:22.315140736Z" level=info msg="StartContainer for \"beb0ab704e8547d68f365d7f1a11fd61e1753266e1d9903e8963af23a259aa30\"" May 10 00:00:22.324930 containerd[2047]: time="2025-05-10T00:00:22.324431352Z" level=info msg="CreateContainer within sandbox \"5945d599b9ad4376008fbb7704f6af4c02d5f1b75d800943c8ec53b02ad8b7ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5c5292370dd5759ebd601675ba97d590962cd60633bc1566033dc82759dd97e5\"" May 10 00:00:22.325411 containerd[2047]: time="2025-05-10T00:00:22.325354656Z" level=info msg="StartContainer for \"5c5292370dd5759ebd601675ba97d590962cd60633bc1566033dc82759dd97e5\"" May 10 00:00:22.331596 containerd[2047]: time="2025-05-10T00:00:22.331432140Z" level=info msg="CreateContainer within sandbox \"f4cd365e65baca5a4f11539fa157427aef7cea47d741ac310b5edfa158a87fae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"088f80f69c235a7b01c214878872cd0d4b7444e2f9eba3d181ca59fe2a78febc\"" May 10 00:00:22.334168 containerd[2047]: time="2025-05-10T00:00:22.333776112Z" level=info msg="StartContainer for \"088f80f69c235a7b01c214878872cd0d4b7444e2f9eba3d181ca59fe2a78febc\"" May 10 00:00:22.362159 kubelet[2988]: W0510 00:00:22.362058 2988 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-45&limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:22.362412 kubelet[2988]: E0510 00:00:22.362386 2988 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-45&limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:22.455558 kubelet[2988]: W0510 00:00:22.455324 2988 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:22.455558 kubelet[2988]: E0510 00:00:22.455418 2988 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.45:6443: connect: connection refused May 10 00:00:22.494411 containerd[2047]: time="2025-05-10T00:00:22.493944817Z" level=info msg="StartContainer for \"beb0ab704e8547d68f365d7f1a11fd61e1753266e1d9903e8963af23a259aa30\" returns successfully" May 10 00:00:22.556306 containerd[2047]: time="2025-05-10T00:00:22.554979445Z" level=info msg="StartContainer for \"088f80f69c235a7b01c214878872cd0d4b7444e2f9eba3d181ca59fe2a78febc\" returns successfully" May 10 00:00:22.556812 containerd[2047]: time="2025-05-10T00:00:22.555163129Z" level=info msg="StartContainer for \"5c5292370dd5759ebd601675ba97d590962cd60633bc1566033dc82759dd97e5\" returns successfully" May 10 00:00:23.920999 kubelet[2988]: I0510 00:00:23.918939 2988 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-45" May 10 00:00:25.318292 kubelet[2988]: E0510 00:00:25.316878 2988 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" May 10 00:00:25.397707 kubelet[2988]: I0510 00:00:25.396688 2988 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-31-45" May 10 00:00:25.501741 kubelet[2988]: E0510 00:00:25.501588 2988 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-45.183e0160cf2331f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-45,UID:ip-172-31-31-45,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-45,},FirstTimestamp:2025-05-10 00:00:20.770009593 +0000 UTC m=+1.391772536,LastTimestamp:2025-05-10 00:00:20.770009593 +0000 UTC m=+1.391772536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-45,}" May 10 00:00:25.770745 kubelet[2988]: I0510 00:00:25.770683 2988 apiserver.go:52] "Watching apiserver" May 10 00:00:25.805641 kubelet[2988]: I0510 00:00:25.805582 2988 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:00:27.328917 systemd[1]: Reloading requested from client PID 3265 ('systemctl') (unit session-7.scope)... May 10 00:00:27.328956 systemd[1]: Reloading... May 10 00:00:27.521336 zram_generator::config[3314]: No configuration found. May 10 00:00:27.745832 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:00:27.935660 systemd[1]: Reloading finished in 606 ms. May 10 00:00:28.002332 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:28.018851 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:00:28.020412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:28.031075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:00:28.332697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:00:28.352067 (kubelet)[3375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:00:28.459933 kubelet[3375]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:00:28.459933 kubelet[3375]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:00:28.459933 kubelet[3375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 10 00:00:28.461637 kubelet[3375]: I0510 00:00:28.460475 3375 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:00:28.471854 kubelet[3375]: I0510 00:00:28.471808 3375 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:00:28.473414 kubelet[3375]: I0510 00:00:28.473376 3375 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:00:28.474116 kubelet[3375]: I0510 00:00:28.474093 3375 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:00:28.481314 kubelet[3375]: I0510 00:00:28.481006 3375 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 00:00:28.483725 kubelet[3375]: I0510 00:00:28.483687 3375 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:00:28.495066 kubelet[3375]: I0510 00:00:28.495032 3375 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:00:28.496734 kubelet[3375]: I0510 00:00:28.496092 3375 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:00:28.496734 kubelet[3375]: I0510 00:00:28.496152 3375 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:00:28.496734 kubelet[3375]: I0510 00:00:28.496488 3375 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:00:28.496734 kubelet[3375]: I0510 00:00:28.496508 3375 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:00:28.496734 kubelet[3375]: I0510 00:00:28.496566 3375 state_mem.go:36] "Initialized new in-memory state store" May 10 00:00:28.497225 kubelet[3375]: I0510 00:00:28.497202 3375 kubelet.go:400] "Attempting to sync node with API server" May 10 00:00:28.497384 kubelet[3375]: I0510 00:00:28.497364 3375 kubelet.go:301] "Adding static pod 
path" path="/etc/kubernetes/manifests" May 10 00:00:28.497640 kubelet[3375]: I0510 00:00:28.497506 3375 kubelet.go:312] "Adding apiserver pod source" May 10 00:00:28.497640 kubelet[3375]: I0510 00:00:28.497549 3375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:00:28.503304 kubelet[3375]: I0510 00:00:28.501784 3375 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 10 00:00:28.503304 kubelet[3375]: I0510 00:00:28.502130 3375 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:00:28.503304 kubelet[3375]: I0510 00:00:28.502813 3375 server.go:1264] "Started kubelet" May 10 00:00:28.505987 kubelet[3375]: I0510 00:00:28.505954 3375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:00:28.520643 kubelet[3375]: I0510 00:00:28.520589 3375 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:00:28.542207 kubelet[3375]: I0510 00:00:28.523093 3375 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:00:28.542449 kubelet[3375]: I0510 00:00:28.527096 3375 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:00:28.559561 kubelet[3375]: I0510 00:00:28.527118 3375 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:00:28.560085 kubelet[3375]: I0510 00:00:28.560063 3375 reconciler.go:26] "Reconciler: start to sync state" May 10 00:00:28.560632 kubelet[3375]: I0510 00:00:28.560573 3375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:00:28.562311 kubelet[3375]: I0510 00:00:28.543166 3375 server.go:455] "Adding debug handlers to kubelet server" May 10 00:00:28.566101 kubelet[3375]: I0510 00:00:28.543030 3375 factory.go:221] Registration of the systemd container factory successfully May 10 00:00:28.566101 kubelet[3375]: I0510 00:00:28.565841 3375 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:00:28.568435 kubelet[3375]: I0510 00:00:28.567880 3375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:00:28.568435 kubelet[3375]: I0510 00:00:28.567944 3375 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:00:28.568435 kubelet[3375]: I0510 00:00:28.567974 3375 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:00:28.568435 kubelet[3375]: E0510 00:00:28.568045 3375 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:00:28.590840 kubelet[3375]: I0510 00:00:28.547573 3375 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:00:28.631307 kubelet[3375]: I0510 00:00:28.630824 3375 factory.go:221] Registration of the containerd container factory successfully May 10 00:00:28.641418 kubelet[3375]: E0510 00:00:28.641357 3375 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" May 10 00:00:28.651773 kubelet[3375]: E0510 00:00:28.650233 3375 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:00:28.654893 kubelet[3375]: I0510 00:00:28.654399 3375 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-45" May 10 00:00:28.670162 kubelet[3375]: E0510 00:00:28.668369 3375 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 10 00:00:28.678109 kubelet[3375]: I0510 00:00:28.677046 3375 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-31-45" May 10 00:00:28.678109 kubelet[3375]: I0510 00:00:28.677322 3375 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-31-45" May 10 00:00:28.779255 kubelet[3375]: I0510 00:00:28.779207 3375 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:00:28.779255 kubelet[3375]: I0510 00:00:28.779242 3375 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:00:28.779759 kubelet[3375]: I0510 00:00:28.779704 3375 state_mem.go:36] "Initialized new in-memory state store" May 10 00:00:28.780124 kubelet[3375]: I0510 00:00:28.780058 3375 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:00:28.780205 kubelet[3375]: I0510 00:00:28.780111 3375 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:00:28.780205 kubelet[3375]: I0510 00:00:28.780152 3375 policy_none.go:49] "None policy: Start" May 10 00:00:28.782744 kubelet[3375]: I0510 00:00:28.782533 3375 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:00:28.782892 kubelet[3375]: I0510 00:00:28.782754 3375 state_mem.go:35] "Initializing new in-memory state store" May 10 00:00:28.784433 kubelet[3375]: I0510 00:00:28.783255 3375 state_mem.go:75] "Updated machine memory state" May 10 00:00:28.786915 kubelet[3375]: I0510 00:00:28.786870 3375 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:00:28.787196 kubelet[3375]: I0510 00:00:28.787138 3375 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:00:28.794775 kubelet[3375]: I0510 00:00:28.794502 3375 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:00:28.868861 kubelet[3375]: I0510 00:00:28.868703 3375 topology_manager.go:215] "Topology Admit Handler" podUID="e75d6a639253a4883234b4bb6c4a4e23" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-45" May 10 00:00:28.868984 kubelet[3375]: I0510 00:00:28.868881 3375 topology_manager.go:215] "Topology Admit Handler" podUID="7c86769cc88450ec952d27d2afaf0622" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-45" May 10 00:00:28.872575 kubelet[3375]: I0510 00:00:28.872020 3375 topology_manager.go:215] "Topology Admit Handler" podUID="f97dc5f39f2193e1aee4e2da6c235d84" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-45" May 10 00:00:28.879617 kubelet[3375]: E0510 00:00:28.879502 3375 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-45\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-45" May 10 00:00:28.881883 kubelet[3375]: E0510 00:00:28.881727 3375 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-45\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:28.969365 kubelet[3375]: I0510 00:00:28.968458 3375 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c86769cc88450ec952d27d2afaf0622-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"7c86769cc88450ec952d27d2afaf0622\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:28.969365 kubelet[3375]: I0510 00:00:28.968527 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c86769cc88450ec952d27d2afaf0622-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"7c86769cc88450ec952d27d2afaf0622\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:28.969365 kubelet[3375]: I0510 00:00:28.968574 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c86769cc88450ec952d27d2afaf0622-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"7c86769cc88450ec952d27d2afaf0622\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:28.969365 kubelet[3375]: I0510 00:00:28.968612 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e75d6a639253a4883234b4bb6c4a4e23-ca-certs\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"e75d6a639253a4883234b4bb6c4a4e23\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" May 10 00:00:28.969365 kubelet[3375]: I0510 00:00:28.968652 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e75d6a639253a4883234b4bb6c4a4e23-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"e75d6a639253a4883234b4bb6c4a4e23\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" May 10 00:00:28.969736 kubelet[3375]: I0510 00:00:28.968687 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c86769cc88450ec952d27d2afaf0622-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"7c86769cc88450ec952d27d2afaf0622\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:28.969736 kubelet[3375]: I0510 00:00:28.968723 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c86769cc88450ec952d27d2afaf0622-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"7c86769cc88450ec952d27d2afaf0622\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:28.969736 kubelet[3375]: I0510 00:00:28.968758 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f97dc5f39f2193e1aee4e2da6c235d84-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-45\" (UID: \"f97dc5f39f2193e1aee4e2da6c235d84\") " pod="kube-system/kube-scheduler-ip-172-31-31-45" May 10 00:00:28.969736 kubelet[3375]: I0510 00:00:28.968794 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e75d6a639253a4883234b4bb6c4a4e23-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: 
\"e75d6a639253a4883234b4bb6c4a4e23\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" May 10 00:00:29.311216 update_engine[2025]: I20250510 00:00:29.311120 2025 update_attempter.cc:509] Updating boot flags... May 10 00:00:29.451764 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3426) May 10 00:00:29.499023 kubelet[3375]: I0510 00:00:29.498290 3375 apiserver.go:52] "Watching apiserver" May 10 00:00:29.561760 kubelet[3375]: I0510 00:00:29.560431 3375 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:00:29.752112 kubelet[3375]: E0510 00:00:29.752036 3375 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-45\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-45" May 10 00:00:29.779451 kubelet[3375]: E0510 00:00:29.777867 3375 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-45\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-45" May 10 00:00:30.050284 kubelet[3375]: I0510 00:00:30.036729 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-45" podStartSLOduration=2.036709735 podStartE2EDuration="2.036709735s" podCreationTimestamp="2025-05-10 00:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:00:29.95792437 +0000 UTC m=+1.599557373" watchObservedRunningTime="2025-05-10 00:00:30.036709735 +0000 UTC m=+1.678342738" May 10 00:00:30.110315 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3426) May 10 00:00:30.118362 kubelet[3375]: I0510 00:00:30.118028 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-45" podStartSLOduration=3.1168198990000002 podStartE2EDuration="3.116819899s" podCreationTimestamp="2025-05-10 00:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:00:30.043589179 +0000 UTC m=+1.685222170" watchObservedRunningTime="2025-05-10 00:00:30.116819899 +0000 UTC m=+1.758452902" May 10 00:00:30.162726 kubelet[3375]: I0510 00:00:30.160979 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-45" podStartSLOduration=3.160956319 podStartE2EDuration="3.160956319s" podCreationTimestamp="2025-05-10 00:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:00:30.120930319 +0000 UTC m=+1.762563322" watchObservedRunningTime="2025-05-10 00:00:30.160956319 +0000 UTC m=+1.802589334" May 10 00:00:30.907554 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3426) May 10 00:00:34.339829 sudo[2407]: pam_unix(sudo:session): session closed for user root May 10 00:00:34.363598 sshd[2403]: pam_unix(sshd:session): session closed for user core May 10 00:00:34.369431 systemd-logind[2020]: Session 7 logged out. Waiting for processes to exit. May 10 00:00:34.369799 systemd[1]: sshd@6-172.31.31.45:22-147.75.109.163:42730.service: Deactivated successfully. May 10 00:00:34.377757 systemd[1]: session-7.scope: Deactivated successfully. 
May 10 00:00:34.381028 systemd-logind[2020]: Removed session 7. May 10 00:00:43.385258 kubelet[3375]: I0510 00:00:43.384944 3375 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:00:43.386037 containerd[2047]: time="2025-05-10T00:00:43.385642845Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 10 00:00:43.388608 kubelet[3375]: I0510 00:00:43.386112 3375 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:00:44.311317 kubelet[3375]: I0510 00:00:44.310241 3375 topology_manager.go:215] "Topology Admit Handler" podUID="2588d50e-17f5-4afc-bd91-334e12885edf" podNamespace="kube-system" podName="kube-proxy-bfhtp" May 10 00:00:44.395149 kubelet[3375]: I0510 00:00:44.395087 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2588d50e-17f5-4afc-bd91-334e12885edf-xtables-lock\") pod \"kube-proxy-bfhtp\" (UID: \"2588d50e-17f5-4afc-bd91-334e12885edf\") " pod="kube-system/kube-proxy-bfhtp" May 10 00:00:44.395775 kubelet[3375]: I0510 00:00:44.395161 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q4tn\" (UniqueName: \"kubernetes.io/projected/2588d50e-17f5-4afc-bd91-334e12885edf-kube-api-access-5q4tn\") pod \"kube-proxy-bfhtp\" (UID: \"2588d50e-17f5-4afc-bd91-334e12885edf\") " pod="kube-system/kube-proxy-bfhtp" May 10 00:00:44.395775 kubelet[3375]: I0510 00:00:44.395208 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2588d50e-17f5-4afc-bd91-334e12885edf-kube-proxy\") pod \"kube-proxy-bfhtp\" (UID: \"2588d50e-17f5-4afc-bd91-334e12885edf\") " pod="kube-system/kube-proxy-bfhtp" May 10 00:00:44.395775 kubelet[3375]: I0510 00:00:44.395247 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2588d50e-17f5-4afc-bd91-334e12885edf-lib-modules\") pod \"kube-proxy-bfhtp\" (UID: \"2588d50e-17f5-4afc-bd91-334e12885edf\") " pod="kube-system/kube-proxy-bfhtp" May 10 00:00:44.510485 kubelet[3375]: I0510 00:00:44.510435 3375 topology_manager.go:215] "Topology Admit Handler" podUID="44360905-35ea-48ef-8936-ea411dfa2cae" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-gnsxl" May 10 00:00:44.596744 kubelet[3375]: I0510 00:00:44.596475 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/44360905-35ea-48ef-8936-ea411dfa2cae-var-lib-calico\") pod \"tigera-operator-797db67f8-gnsxl\" (UID: \"44360905-35ea-48ef-8936-ea411dfa2cae\") " pod="tigera-operator/tigera-operator-797db67f8-gnsxl" May 10 00:00:44.596744 kubelet[3375]: I0510 00:00:44.596536 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlh8h\" (UniqueName: \"kubernetes.io/projected/44360905-35ea-48ef-8936-ea411dfa2cae-kube-api-access-zlh8h\") pod \"tigera-operator-797db67f8-gnsxl\" (UID: \"44360905-35ea-48ef-8936-ea411dfa2cae\") " pod="tigera-operator/tigera-operator-797db67f8-gnsxl" May 10 00:00:44.622148 containerd[2047]: time="2025-05-10T00:00:44.622064147Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-bfhtp,Uid:2588d50e-17f5-4afc-bd91-334e12885edf,Namespace:kube-system,Attempt:0,}" May 10 00:00:44.679064 containerd[2047]: time="2025-05-10T00:00:44.678841559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:44.679899 containerd[2047]: time="2025-05-10T00:00:44.679702655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:44.679899 containerd[2047]: time="2025-05-10T00:00:44.679744919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:44.680287 containerd[2047]: time="2025-05-10T00:00:44.680132987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:44.766232 containerd[2047]: time="2025-05-10T00:00:44.766174308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bfhtp,Uid:2588d50e-17f5-4afc-bd91-334e12885edf,Namespace:kube-system,Attempt:0,} returns sandbox id \"248627abce8e3aadfb49e48c6dfe55c8f4125a5c4f787f89edcd572a7c355f9d\"" May 10 00:00:44.772708 containerd[2047]: time="2025-05-10T00:00:44.772648332Z" level=info msg="CreateContainer within sandbox \"248627abce8e3aadfb49e48c6dfe55c8f4125a5c4f787f89edcd572a7c355f9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:00:44.802961 containerd[2047]: time="2025-05-10T00:00:44.802886556Z" level=info msg="CreateContainer within sandbox \"248627abce8e3aadfb49e48c6dfe55c8f4125a5c4f787f89edcd572a7c355f9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8cf2902a1ad6f5ed2c503f265d4fe533bc5bb822671505181d04d34879a0aba4\"" May 10 00:00:44.803785 containerd[2047]: time="2025-05-10T00:00:44.803713584Z" level=info msg="StartContainer for \"8cf2902a1ad6f5ed2c503f265d4fe533bc5bb822671505181d04d34879a0aba4\"" May 10 00:00:44.829978 containerd[2047]: time="2025-05-10T00:00:44.829657932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-gnsxl,Uid:44360905-35ea-48ef-8936-ea411dfa2cae,Namespace:tigera-operator,Attempt:0,}" May 10 00:00:44.894309 containerd[2047]: time="2025-05-10T00:00:44.893676792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:44.895249 containerd[2047]: time="2025-05-10T00:00:44.895136352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:44.895719 containerd[2047]: time="2025-05-10T00:00:44.895647768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:44.896433 containerd[2047]: time="2025-05-10T00:00:44.896304888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:44.934341 containerd[2047]: time="2025-05-10T00:00:44.933789097Z" level=info msg="StartContainer for \"8cf2902a1ad6f5ed2c503f265d4fe533bc5bb822671505181d04d34879a0aba4\" returns successfully" May 10 00:00:45.005156 containerd[2047]: time="2025-05-10T00:00:45.004837833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-gnsxl,Uid:44360905-35ea-48ef-8936-ea411dfa2cae,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"27249a29e0e0708132bd99e3685e42b6270be6cac558cde69490ce7200b5b164\"" May 10 00:00:45.008674 containerd[2047]: time="2025-05-10T00:00:45.008085129Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 10 00:00:45.778468 kubelet[3375]: I0510 00:00:45.778368 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bfhtp" podStartSLOduration=1.778342729 podStartE2EDuration="1.778342729s" podCreationTimestamp="2025-05-10 00:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:00:45.777842269 +0000 UTC m=+17.419475296" watchObservedRunningTime="2025-05-10 00:00:45.778342729 +0000 UTC m=+17.419975744" May 10 00:00:46.291860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775673595.mount: Deactivated successfully. May 10 00:00:47.201359 containerd[2047]: time="2025-05-10T00:00:47.200765220Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:47.203335 containerd[2047]: time="2025-05-10T00:00:47.203224548Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 10 00:00:47.205863 containerd[2047]: time="2025-05-10T00:00:47.205779204Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:47.211179 containerd[2047]: time="2025-05-10T00:00:47.211094160Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:47.213189 containerd[2047]: time="2025-05-10T00:00:47.212990556Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.203077071s" May 10 00:00:47.213189 containerd[2047]: time="2025-05-10T00:00:47.213054684Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 10 00:00:47.218895 containerd[2047]: time="2025-05-10T00:00:47.218699268Z" level=info msg="CreateContainer within sandbox \"27249a29e0e0708132bd99e3685e42b6270be6cac558cde69490ce7200b5b164\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 10 00:00:47.242679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143503756.mount: Deactivated successfully. 
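The pod_startup_latency_tracker entry above for kube-proxy-bfhtp is reproducible from its own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to additionally subtract the image-pull window (firstStartedPulling to lastFinishedPulling), which is zero here because both pulling timestamps are the zero time. A small Go sketch of that arithmetic (the subtraction rule for the SLO figure is inferred from the numbers in this log, not quoted from kubelet documentation):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout matching the timestamps printed by the tracker lines above.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    	created, _ := time.Parse(layout, "2025-05-10 00:00:44 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-05-10 00:00:45.778342729 +0000 UTC")

    	// No image pull happened for kube-proxy (both pulling timestamps are
    	// the zero time), so the SLO and E2E durations coincide.
    	e2e := running.Sub(created)
    	fmt.Println(e2e) // 1.778342729s, matching podStartE2EDuration
    }

Running the same subtraction on the tigera-operator tracker entry a few lines below (4.591517443s end to end, minus the roughly 2.208s pull window between firstStartedPulling and lastFinishedPulling) lands within a microsecond of its reported podStartSLOduration=2.383033844.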
May 10 00:00:47.247567 containerd[2047]: time="2025-05-10T00:00:47.247504428Z" level=info msg="CreateContainer within sandbox \"27249a29e0e0708132bd99e3685e42b6270be6cac558cde69490ce7200b5b164\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"477a43a7e6e6a72f4b4a3ffd7d9372caf3af1ab829bcb6aa70d2a1685043d472\"" May 10 00:00:47.249344 containerd[2047]: time="2025-05-10T00:00:47.248628336Z" level=info msg="StartContainer for \"477a43a7e6e6a72f4b4a3ffd7d9372caf3af1ab829bcb6aa70d2a1685043d472\"" May 10 00:00:47.309370 systemd[1]: run-containerd-runc-k8s.io-477a43a7e6e6a72f4b4a3ffd7d9372caf3af1ab829bcb6aa70d2a1685043d472-runc.UoG5ll.mount: Deactivated successfully. May 10 00:00:47.359291 containerd[2047]: time="2025-05-10T00:00:47.358406077Z" level=info msg="StartContainer for \"477a43a7e6e6a72f4b4a3ffd7d9372caf3af1ab829bcb6aa70d2a1685043d472\" returns successfully" May 10 00:00:48.591773 kubelet[3375]: I0510 00:00:48.591544 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-gnsxl" podStartSLOduration=2.383033844 podStartE2EDuration="4.591517443s" podCreationTimestamp="2025-05-10 00:00:44 +0000 UTC" firstStartedPulling="2025-05-10 00:00:45.006773709 +0000 UTC m=+16.648406736" lastFinishedPulling="2025-05-10 00:00:47.215257356 +0000 UTC m=+18.856890335" observedRunningTime="2025-05-10 00:00:47.784948599 +0000 UTC m=+19.426581602" watchObservedRunningTime="2025-05-10 00:00:48.591517443 +0000 UTC m=+20.233150434" May 10 00:00:52.250836 kubelet[3375]: I0510 00:00:52.250754 3375 topology_manager.go:215] "Topology Admit Handler" podUID="9cb3d4ac-e19f-4734-94a6-0258d8701d69" podNamespace="calico-system" podName="calico-typha-cb7cccd9-9dkxb" May 10 00:00:52.363062 kubelet[3375]: I0510 00:00:52.362999 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cb3d4ac-e19f-4734-94a6-0258d8701d69-tigera-ca-bundle\") pod \"calico-typha-cb7cccd9-9dkxb\" (UID: \"9cb3d4ac-e19f-4734-94a6-0258d8701d69\") " pod="calico-system/calico-typha-cb7cccd9-9dkxb" May 10 00:00:52.363443 kubelet[3375]: I0510 00:00:52.363121 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9cb3d4ac-e19f-4734-94a6-0258d8701d69-typha-certs\") pod \"calico-typha-cb7cccd9-9dkxb\" (UID: \"9cb3d4ac-e19f-4734-94a6-0258d8701d69\") " pod="calico-system/calico-typha-cb7cccd9-9dkxb" May 10 00:00:52.363443 kubelet[3375]: I0510 00:00:52.363191 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzt48\" (UniqueName: \"kubernetes.io/projected/9cb3d4ac-e19f-4734-94a6-0258d8701d69-kube-api-access-pzt48\") pod \"calico-typha-cb7cccd9-9dkxb\" (UID: \"9cb3d4ac-e19f-4734-94a6-0258d8701d69\") " pod="calico-system/calico-typha-cb7cccd9-9dkxb" May 10 00:00:52.434608 kubelet[3375]: I0510 00:00:52.434511 3375 topology_manager.go:215] "Topology Admit Handler" podUID="81566b29-d299-49e2-9d9e-689a34157e6f" podNamespace="calico-system" podName="calico-node-85kh5" May 10 00:00:52.464299 kubelet[3375]: I0510 00:00:52.463470 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81566b29-d299-49e2-9d9e-689a34157e6f-lib-modules\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " 
pod="calico-system/calico-node-85kh5" May 10 00:00:52.464299 kubelet[3375]: I0510 00:00:52.463546 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81566b29-d299-49e2-9d9e-689a34157e6f-var-lib-calico\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.464299 kubelet[3375]: I0510 00:00:52.463588 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81566b29-d299-49e2-9d9e-689a34157e6f-xtables-lock\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.464299 kubelet[3375]: I0510 00:00:52.463649 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/81566b29-d299-49e2-9d9e-689a34157e6f-cni-log-dir\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.464299 kubelet[3375]: I0510 00:00:52.463686 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/81566b29-d299-49e2-9d9e-689a34157e6f-flexvol-driver-host\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.464665 kubelet[3375]: I0510 00:00:52.463731 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81566b29-d299-49e2-9d9e-689a34157e6f-tigera-ca-bundle\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.464665 kubelet[3375]: I0510 00:00:52.463765 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/81566b29-d299-49e2-9d9e-689a34157e6f-node-certs\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.464665 kubelet[3375]: I0510 00:00:52.463802 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/81566b29-d299-49e2-9d9e-689a34157e6f-var-run-calico\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.464665 kubelet[3375]: I0510 00:00:52.463844 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/81566b29-d299-49e2-9d9e-689a34157e6f-cni-bin-dir\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.464665 kubelet[3375]: I0510 00:00:52.463880 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6542b\" (UniqueName: \"kubernetes.io/projected/81566b29-d299-49e2-9d9e-689a34157e6f-kube-api-access-6542b\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.464931 
kubelet[3375]: I0510 00:00:52.463932 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/81566b29-d299-49e2-9d9e-689a34157e6f-policysync\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.464931 kubelet[3375]: I0510 00:00:52.463966 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/81566b29-d299-49e2-9d9e-689a34157e6f-cni-net-dir\") pod \"calico-node-85kh5\" (UID: \"81566b29-d299-49e2-9d9e-689a34157e6f\") " pod="calico-system/calico-node-85kh5" May 10 00:00:52.583439 kubelet[3375]: E0510 00:00:52.582978 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.583439 kubelet[3375]: W0510 00:00:52.583008 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.583439 kubelet[3375]: E0510 00:00:52.583198 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.590387 kubelet[3375]: E0510 00:00:52.588788 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.590387 kubelet[3375]: W0510 00:00:52.588824 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.590387 kubelet[3375]: E0510 00:00:52.588913 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.590919 containerd[2047]: time="2025-05-10T00:00:52.590708695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cb7cccd9-9dkxb,Uid:9cb3d4ac-e19f-4734-94a6-0258d8701d69,Namespace:calico-system,Attempt:0,}" May 10 00:00:52.593850 kubelet[3375]: E0510 00:00:52.593191 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.593850 kubelet[3375]: W0510 00:00:52.593223 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.593850 kubelet[3375]: E0510 00:00:52.593295 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 10 00:00:52.602450 kubelet[3375]: E0510 00:00:52.598419 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:00:52.602450 kubelet[3375]: W0510 00:00:52.598454 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:00:52.602450 kubelet[3375]: E0510 00:00:52.598501 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:00:52.602715 kubelet[3375]: E0510 00:00:52.602486 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:00:52.602715 kubelet[3375]: W0510 00:00:52.602516 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:00:52.602715 kubelet[3375]: E0510 00:00:52.602549 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:00:52.609697 kubelet[3375]: E0510 00:00:52.609656 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:00:52.609917 kubelet[3375]: W0510 00:00:52.609886 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:00:52.610082 kubelet[3375]: E0510 00:00:52.610058 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:00:52.674897 kubelet[3375]: I0510 00:00:52.674335 3375 topology_manager.go:215] "Topology Admit Handler" podUID="0c2c3604-3d54-40f4-b6c5-046056f34637" podNamespace="calico-system" podName="csi-node-driver-j62d4"
May 10 00:00:52.679447 kubelet[3375]: E0510 00:00:52.679244 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j62d4" podUID="0c2c3604-3d54-40f4-b6c5-046056f34637"
May 10 00:00:52.696745 containerd[2047]: time="2025-05-10T00:00:52.696583171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:00:52.699413 containerd[2047]: time="2025-05-10T00:00:52.696693739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:00:52.699707 containerd[2047]: time="2025-05-10T00:00:52.699610051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
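The "Error syncing pod" entry above for csi-node-driver-j62d4 is the kubelet holding the pod back while the runtime still reports its NetworkReady condition as false; the condition clears once calico-node has installed a CNI config. A hedged Go sketch of reading that condition straight back from containerd via the CRI Status call (same assumed socket path as the earlier sketch, not kubelet code):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	resp, err := client.Status(ctx, &runtimeapi.StatusRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Until a CNI config exists, the NetworkReady condition carries the
    	// same reason/message quoted in the kubelet error above.
    	for _, c := range resp.Status.Conditions {
    		fmt.Printf("%s=%v reason=%q message=%q\n", c.Type, c.Status, c.Reason, c.Message)
    	}
    }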
May 10 00:00:52.702074 containerd[2047]: time="2025-05-10T00:00:52.700039435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:00:52.756112 kubelet[3375]: E0510 00:00:52.754319 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:00:52.756112 kubelet[3375]: W0510 00:00:52.754380 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:00:52.756112 kubelet[3375]: E0510 00:00:52.754416 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:00:52.756112 kubelet[3375]: E0510 00:00:52.756029 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:00:52.756112 kubelet[3375]: W0510 00:00:52.756102 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:00:52.758193 kubelet[3375]: E0510 00:00:52.756169 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:00:52.758193 kubelet[3375]: E0510 00:00:52.756897 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:00:52.758193 kubelet[3375]: W0510 00:00:52.756922 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:00:52.758193 kubelet[3375]: E0510 00:00:52.756981 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:00:52.759797 kubelet[3375]: E0510 00:00:52.758433 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:00:52.759797 kubelet[3375]: W0510 00:00:52.758484 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:00:52.759797 kubelet[3375]: E0510 00:00:52.758517 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 10 00:00:52.762545 kubelet[3375]: E0510 00:00:52.761508 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:00:52.762545 kubelet[3375]: W0510 00:00:52.761779 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:00:52.762545 kubelet[3375]: E0510 00:00:52.761838 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 10 00:00:52.765587 kubelet[3375]: E0510 00:00:52.764583 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.765587 kubelet[3375]: W0510 00:00:52.764657 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.765587 kubelet[3375]: E0510 00:00:52.765148 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.768547 kubelet[3375]: E0510 00:00:52.767578 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.768547 kubelet[3375]: W0510 00:00:52.767637 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.768547 kubelet[3375]: E0510 00:00:52.767672 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.771291 kubelet[3375]: E0510 00:00:52.770400 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.771291 kubelet[3375]: W0510 00:00:52.770441 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.771291 kubelet[3375]: E0510 00:00:52.770626 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.774025 kubelet[3375]: E0510 00:00:52.773682 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.774025 kubelet[3375]: W0510 00:00:52.773722 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.774025 kubelet[3375]: E0510 00:00:52.773759 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:52.774025 kubelet[3375]: I0510 00:00:52.773809 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c2c3604-3d54-40f4-b6c5-046056f34637-kubelet-dir\") pod \"csi-node-driver-j62d4\" (UID: \"0c2c3604-3d54-40f4-b6c5-046056f34637\") " pod="calico-system/csi-node-driver-j62d4" May 10 00:00:52.778196 containerd[2047]: time="2025-05-10T00:00:52.777487231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-85kh5,Uid:81566b29-d299-49e2-9d9e-689a34157e6f,Namespace:calico-system,Attempt:0,}" May 10 00:00:52.779980 kubelet[3375]: E0510 00:00:52.778886 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.779980 kubelet[3375]: W0510 00:00:52.778936 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.779980 kubelet[3375]: E0510 00:00:52.779039 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.779980 kubelet[3375]: I0510 00:00:52.779086 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0c2c3604-3d54-40f4-b6c5-046056f34637-varrun\") pod \"csi-node-driver-j62d4\" (UID: \"0c2c3604-3d54-40f4-b6c5-046056f34637\") " pod="calico-system/csi-node-driver-j62d4" May 10 00:00:52.779980 kubelet[3375]: E0510 00:00:52.779661 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.779980 kubelet[3375]: W0510 00:00:52.779684 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.783503 kubelet[3375]: E0510 00:00:52.782685 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.783503 kubelet[3375]: W0510 00:00:52.782742 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.783503 kubelet[3375]: E0510 00:00:52.782778 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.783503 kubelet[3375]: E0510 00:00:52.782929 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:52.787818 kubelet[3375]: E0510 00:00:52.787346 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.787818 kubelet[3375]: W0510 00:00:52.787407 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.787818 kubelet[3375]: E0510 00:00:52.787448 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.791729 kubelet[3375]: E0510 00:00:52.791500 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.791729 kubelet[3375]: W0510 00:00:52.791556 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.791729 kubelet[3375]: E0510 00:00:52.791643 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.795765 kubelet[3375]: E0510 00:00:52.795358 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.795765 kubelet[3375]: W0510 00:00:52.795428 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.795765 kubelet[3375]: E0510 00:00:52.795539 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.797634 kubelet[3375]: E0510 00:00:52.796851 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.797634 kubelet[3375]: W0510 00:00:52.796906 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.802793 kubelet[3375]: E0510 00:00:52.798653 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.802793 kubelet[3375]: E0510 00:00:52.802587 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.802793 kubelet[3375]: W0510 00:00:52.802639 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.802793 kubelet[3375]: E0510 00:00:52.802673 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:52.804026 kubelet[3375]: E0510 00:00:52.803626 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.804858 kubelet[3375]: W0510 00:00:52.804198 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.804858 kubelet[3375]: E0510 00:00:52.804245 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.806432 kubelet[3375]: E0510 00:00:52.805956 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.806639 kubelet[3375]: W0510 00:00:52.806606 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.806784 kubelet[3375]: E0510 00:00:52.806759 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.810238 kubelet[3375]: E0510 00:00:52.809031 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.810238 kubelet[3375]: W0510 00:00:52.809066 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.810238 kubelet[3375]: E0510 00:00:52.809100 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.815308 kubelet[3375]: E0510 00:00:52.813000 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.815308 kubelet[3375]: W0510 00:00:52.813034 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.815308 kubelet[3375]: E0510 00:00:52.813085 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.815308 kubelet[3375]: E0510 00:00:52.814040 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.815308 kubelet[3375]: W0510 00:00:52.814065 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.815308 kubelet[3375]: E0510 00:00:52.814095 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:52.816384 kubelet[3375]: E0510 00:00:52.816110 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.816384 kubelet[3375]: W0510 00:00:52.816151 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.816384 kubelet[3375]: E0510 00:00:52.816185 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.819146 kubelet[3375]: E0510 00:00:52.818934 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.819146 kubelet[3375]: W0510 00:00:52.818967 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.819146 kubelet[3375]: E0510 00:00:52.818999 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.822370 kubelet[3375]: E0510 00:00:52.820741 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.822370 kubelet[3375]: W0510 00:00:52.820776 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.822370 kubelet[3375]: E0510 00:00:52.820829 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.822968 kubelet[3375]: E0510 00:00:52.822652 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.822968 kubelet[3375]: W0510 00:00:52.822683 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.822968 kubelet[3375]: E0510 00:00:52.822722 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.881520 kubelet[3375]: E0510 00:00:52.881067 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.881520 kubelet[3375]: W0510 00:00:52.881122 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.881520 kubelet[3375]: E0510 00:00:52.881156 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:52.888491 kubelet[3375]: E0510 00:00:52.887416 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.888491 kubelet[3375]: W0510 00:00:52.887472 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.888491 kubelet[3375]: E0510 00:00:52.887526 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.892584 kubelet[3375]: E0510 00:00:52.890979 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.892584 kubelet[3375]: W0510 00:00:52.891409 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.892584 kubelet[3375]: E0510 00:00:52.891770 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.899848 kubelet[3375]: E0510 00:00:52.898844 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.899848 kubelet[3375]: W0510 00:00:52.898879 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.901005 kubelet[3375]: E0510 00:00:52.900682 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.901005 kubelet[3375]: I0510 00:00:52.900750 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c2c3604-3d54-40f4-b6c5-046056f34637-socket-dir\") pod \"csi-node-driver-j62d4\" (UID: \"0c2c3604-3d54-40f4-b6c5-046056f34637\") " pod="calico-system/csi-node-driver-j62d4" May 10 00:00:52.902595 kubelet[3375]: E0510 00:00:52.901494 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.902595 kubelet[3375]: W0510 00:00:52.901521 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.906542 kubelet[3375]: E0510 00:00:52.906377 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.906956 containerd[2047]: time="2025-05-10T00:00:52.903760796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:52.906956 containerd[2047]: time="2025-05-10T00:00:52.903913064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:52.906956 containerd[2047]: time="2025-05-10T00:00:52.903950324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:52.906956 containerd[2047]: time="2025-05-10T00:00:52.904169300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:52.907215 kubelet[3375]: E0510 00:00:52.906739 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.907215 kubelet[3375]: W0510 00:00:52.906771 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.909710 kubelet[3375]: E0510 00:00:52.908143 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.910409 kubelet[3375]: E0510 00:00:52.910074 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.910409 kubelet[3375]: W0510 00:00:52.910107 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.910409 kubelet[3375]: E0510 00:00:52.910310 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.911224 kubelet[3375]: I0510 00:00:52.910678 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c2c3604-3d54-40f4-b6c5-046056f34637-registration-dir\") pod \"csi-node-driver-j62d4\" (UID: \"0c2c3604-3d54-40f4-b6c5-046056f34637\") " pod="calico-system/csi-node-driver-j62d4" May 10 00:00:52.911836 kubelet[3375]: E0510 00:00:52.911654 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.911836 kubelet[3375]: W0510 00:00:52.911711 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.911836 kubelet[3375]: E0510 00:00:52.911797 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.912822 kubelet[3375]: E0510 00:00:52.912794 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.913068 kubelet[3375]: W0510 00:00:52.912963 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.913244 kubelet[3375]: E0510 00:00:52.913163 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:52.913887 kubelet[3375]: E0510 00:00:52.913710 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.913887 kubelet[3375]: W0510 00:00:52.913736 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.914380 kubelet[3375]: E0510 00:00:52.914074 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.915623 kubelet[3375]: E0510 00:00:52.914717 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.915623 kubelet[3375]: W0510 00:00:52.914743 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.917144 kubelet[3375]: E0510 00:00:52.916415 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.917144 kubelet[3375]: I0510 00:00:52.916881 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttdz\" (UniqueName: \"kubernetes.io/projected/0c2c3604-3d54-40f4-b6c5-046056f34637-kube-api-access-cttdz\") pod \"csi-node-driver-j62d4\" (UID: \"0c2c3604-3d54-40f4-b6c5-046056f34637\") " pod="calico-system/csi-node-driver-j62d4" May 10 00:00:52.918405 kubelet[3375]: E0510 00:00:52.918332 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.918405 kubelet[3375]: W0510 00:00:52.918372 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.919059 kubelet[3375]: E0510 00:00:52.918736 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.920481 kubelet[3375]: E0510 00:00:52.919790 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.920481 kubelet[3375]: W0510 00:00:52.919853 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.920481 kubelet[3375]: E0510 00:00:52.920015 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:52.921510 kubelet[3375]: E0510 00:00:52.921119 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.921510 kubelet[3375]: W0510 00:00:52.921145 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.921510 kubelet[3375]: E0510 00:00:52.921345 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.922663 kubelet[3375]: E0510 00:00:52.921595 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.922663 kubelet[3375]: W0510 00:00:52.921612 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.922663 kubelet[3375]: E0510 00:00:52.921773 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.925968 kubelet[3375]: E0510 00:00:52.924859 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.925968 kubelet[3375]: W0510 00:00:52.924897 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.925968 kubelet[3375]: E0510 00:00:52.925083 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.925968 kubelet[3375]: E0510 00:00:52.925376 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.925968 kubelet[3375]: W0510 00:00:52.925395 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.925968 kubelet[3375]: E0510 00:00:52.925502 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.927781 kubelet[3375]: E0510 00:00:52.927677 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.927781 kubelet[3375]: W0510 00:00:52.927761 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.928006 kubelet[3375]: E0510 00:00:52.927829 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:52.929724 kubelet[3375]: E0510 00:00:52.929431 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:52.929724 kubelet[3375]: W0510 00:00:52.929471 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:52.929724 kubelet[3375]: E0510 00:00:52.929506 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:52.982298 containerd[2047]: time="2025-05-10T00:00:52.982040313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cb7cccd9-9dkxb,Uid:9cb3d4ac-e19f-4734-94a6-0258d8701d69,Namespace:calico-system,Attempt:0,} returns sandbox id \"11fa647d2819458a6cd172d8ee0ad1fb2c45a4e2e088e4f608c122aed3179db5\"" May 10 00:00:52.993545 containerd[2047]: time="2025-05-10T00:00:52.993282861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 10 00:00:53.022816 kubelet[3375]: E0510 00:00:53.022586 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.022816 kubelet[3375]: W0510 00:00:53.022644 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.022816 kubelet[3375]: E0510 00:00:53.022681 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.024792 kubelet[3375]: E0510 00:00:53.024556 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.024792 kubelet[3375]: W0510 00:00:53.024625 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.024792 kubelet[3375]: E0510 00:00:53.024724 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.026103 kubelet[3375]: E0510 00:00:53.026058 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.026103 kubelet[3375]: W0510 00:00:53.026096 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.026305 kubelet[3375]: E0510 00:00:53.026180 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:53.030111 kubelet[3375]: E0510 00:00:53.029933 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.030111 kubelet[3375]: W0510 00:00:53.029966 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.030111 kubelet[3375]: E0510 00:00:53.030001 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.033489 kubelet[3375]: E0510 00:00:53.033074 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.033489 kubelet[3375]: W0510 00:00:53.033223 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.033904 kubelet[3375]: E0510 00:00:53.033258 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.037230 kubelet[3375]: E0510 00:00:53.037110 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.037230 kubelet[3375]: W0510 00:00:53.037147 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.037693 kubelet[3375]: E0510 00:00:53.037182 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.038635 kubelet[3375]: E0510 00:00:53.038365 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.038635 kubelet[3375]: W0510 00:00:53.038397 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.038635 kubelet[3375]: E0510 00:00:53.038441 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.039357 kubelet[3375]: E0510 00:00:53.039133 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.039357 kubelet[3375]: W0510 00:00:53.039157 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.039357 kubelet[3375]: E0510 00:00:53.039197 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:53.040425 kubelet[3375]: E0510 00:00:53.040075 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.040425 kubelet[3375]: W0510 00:00:53.040143 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.040425 kubelet[3375]: E0510 00:00:53.040206 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.041672 kubelet[3375]: E0510 00:00:53.041448 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.041672 kubelet[3375]: W0510 00:00:53.041491 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.041672 kubelet[3375]: E0510 00:00:53.041556 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.042400 kubelet[3375]: E0510 00:00:53.042156 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.042400 kubelet[3375]: W0510 00:00:53.042185 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.042400 kubelet[3375]: E0510 00:00:53.042249 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.043249 kubelet[3375]: E0510 00:00:53.043216 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.043556 kubelet[3375]: W0510 00:00:53.043249 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.043809 kubelet[3375]: E0510 00:00:53.043553 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.044640 kubelet[3375]: E0510 00:00:53.044526 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.044640 kubelet[3375]: W0510 00:00:53.044558 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.045028 kubelet[3375]: E0510 00:00:53.044588 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:53.045724 kubelet[3375]: E0510 00:00:53.045576 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.045724 kubelet[3375]: W0510 00:00:53.045603 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.045724 kubelet[3375]: E0510 00:00:53.045629 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.047601 kubelet[3375]: E0510 00:00:53.046402 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.047601 kubelet[3375]: W0510 00:00:53.046429 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.047601 kubelet[3375]: E0510 00:00:53.046454 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:00:53.062866 containerd[2047]: time="2025-05-10T00:00:53.062796449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-85kh5,Uid:81566b29-d299-49e2-9d9e-689a34157e6f,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ff31acd35e2ac8bdaf71c389d06ee45c0d915f7c19ee6607dfba7b0cf959008\"" May 10 00:00:53.069518 kubelet[3375]: E0510 00:00:53.069392 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:53.069518 kubelet[3375]: W0510 00:00:53.069424 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:53.069518 kubelet[3375]: E0510 00:00:53.069456 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:00:54.573410 kubelet[3375]: E0510 00:00:54.572801 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j62d4" podUID="0c2c3604-3d54-40f4-b6c5-046056f34637" May 10 00:00:55.169436 containerd[2047]: time="2025-05-10T00:00:55.169371295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:55.170869 containerd[2047]: time="2025-05-10T00:00:55.170817883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 10 00:00:55.171737 containerd[2047]: time="2025-05-10T00:00:55.171626203Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:55.175544 containerd[2047]: time="2025-05-10T00:00:55.175449295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:55.179102 containerd[2047]: time="2025-05-10T00:00:55.179029219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.185611154s" May 10 00:00:55.179102 containerd[2047]: time="2025-05-10T00:00:55.179095447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 10 00:00:55.186549 containerd[2047]: time="2025-05-10T00:00:55.185860327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 10 00:00:55.232750 containerd[2047]: time="2025-05-10T00:00:55.232484072Z" level=info msg="CreateContainer within sandbox \"11fa647d2819458a6cd172d8ee0ad1fb2c45a4e2e088e4f608c122aed3179db5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 10 00:00:55.250862 containerd[2047]: time="2025-05-10T00:00:55.250670852Z" level=info msg="CreateContainer within sandbox \"11fa647d2819458a6cd172d8ee0ad1fb2c45a4e2e088e4f608c122aed3179db5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f975e59a65fe0c8b9d635dfcab0281386fc915bf88da48c9d8060c5daafec88f\"" May 10 00:00:55.252308 containerd[2047]: time="2025-05-10T00:00:55.251970248Z" level=info msg="StartContainer for \"f975e59a65fe0c8b9d635dfcab0281386fc915bf88da48c9d8060c5daafec88f\"" May 10 00:00:55.383522 containerd[2047]: time="2025-05-10T00:00:55.383445932Z" level=info msg="StartContainer for \"f975e59a65fe0c8b9d635dfcab0281386fc915bf88da48c9d8060c5daafec88f\" returns successfully" May 10 00:00:55.831818 kubelet[3375]: I0510 00:00:55.831632 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cb7cccd9-9dkxb" podStartSLOduration=1.638329041 podStartE2EDuration="3.831607739s" podCreationTimestamp="2025-05-10 00:00:52 +0000 UTC" firstStartedPulling="2025-05-10 
May 10 00:00:55.854182 kubelet[3375]: E0510 00:00:55.854149 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:55.854182 kubelet[3375]: W0510 00:00:55.854224 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:55.854681 kubelet[3375]: E0510 00:00:55.854254 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same kubelet E/W/E FlexVolume triplet repeats verbatim 31 more times, timestamps 00:00:55.855089 through 00:00:55.871491 ...]
May 10 00:00:55.872725 kubelet[3375]: E0510 00:00:55.872647 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:00:55.872725 kubelet[3375]: W0510 00:00:55.872726 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:00:55.872884 kubelet[3375]: E0510 00:00:55.872756 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 10 00:00:56.464336 containerd[2047]: time="2025-05-10T00:00:56.463699114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:56.466214 containerd[2047]: time="2025-05-10T00:00:56.466154254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 10 00:00:56.468621 containerd[2047]: time="2025-05-10T00:00:56.468528766Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:56.474541 containerd[2047]: time="2025-05-10T00:00:56.474449806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:00:56.476426 containerd[2047]: time="2025-05-10T00:00:56.475511386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.289585599s" May 10 00:00:56.476426 containerd[2047]: time="2025-05-10T00:00:56.475573690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 10 00:00:56.487021 containerd[2047]: time="2025-05-10T00:00:56.486956338Z" level=info msg="CreateContainer within sandbox \"4ff31acd35e2ac8bdaf71c389d06ee45c0d915f7c19ee6607dfba7b0cf959008\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 10 00:00:56.519533 containerd[2047]: time="2025-05-10T00:00:56.519409186Z" level=info msg="CreateContainer within sandbox \"4ff31acd35e2ac8bdaf71c389d06ee45c0d915f7c19ee6607dfba7b0cf959008\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6ef0e9e2fe6a4b453b38f121aa306555aa49b509fc776a270548cc8f87df6ae1\"" May 10 00:00:56.522140 containerd[2047]: time="2025-05-10T00:00:56.520331434Z" level=info msg="StartContainer for \"6ef0e9e2fe6a4b453b38f121aa306555aa49b509fc776a270548cc8f87df6ae1\"" May 10 00:00:56.574801 kubelet[3375]: E0510 00:00:56.574425 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j62d4" podUID="0c2c3604-3d54-40f4-b6c5-046056f34637" May 10 00:00:56.639574 containerd[2047]: time="2025-05-10T00:00:56.639509951Z" level=info msg="StartContainer for \"6ef0e9e2fe6a4b453b38f121aa306555aa49b509fc776a270548cc8f87df6ae1\" returns successfully" May 10 00:00:56.715675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ef0e9e2fe6a4b453b38f121aa306555aa49b509fc776a270548cc8f87df6ae1-rootfs.mount: Deactivated successfully. 
May 10 00:00:56.822694 kubelet[3375]: I0510 00:00:56.821609 3375 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:00:56.938707 containerd[2047]: time="2025-05-10T00:00:56.938629584Z" level=info msg="shim disconnected" id=6ef0e9e2fe6a4b453b38f121aa306555aa49b509fc776a270548cc8f87df6ae1 namespace=k8s.io May 10 00:00:56.938707 containerd[2047]: time="2025-05-10T00:00:56.938703180Z" level=warning msg="cleaning up after shim disconnected" id=6ef0e9e2fe6a4b453b38f121aa306555aa49b509fc776a270548cc8f87df6ae1 namespace=k8s.io May 10 00:00:56.939102 containerd[2047]: time="2025-05-10T00:00:56.938725812Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:57.829355 containerd[2047]: time="2025-05-10T00:00:57.828862177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 10 00:00:58.569829 kubelet[3375]: E0510 00:00:58.569753 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j62d4" podUID="0c2c3604-3d54-40f4-b6c5-046056f34637" May 10 00:01:00.571022 kubelet[3375]: E0510 00:01:00.569349 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j62d4" podUID="0c2c3604-3d54-40f4-b6c5-046056f34637" May 10 00:01:01.632439 containerd[2047]: time="2025-05-10T00:01:01.632377131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:01.633949 containerd[2047]: time="2025-05-10T00:01:01.633898623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 10 00:01:01.634760 containerd[2047]: time="2025-05-10T00:01:01.634673451Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:01.644879 containerd[2047]: time="2025-05-10T00:01:01.644772928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:01.648724 containerd[2047]: time="2025-05-10T00:01:01.647755768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.818815531s" May 10 00:01:01.648724 containerd[2047]: time="2025-05-10T00:01:01.647818828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 10 00:01:01.654067 containerd[2047]: time="2025-05-10T00:01:01.654010624Z" level=info msg="CreateContainer within sandbox \"4ff31acd35e2ac8bdaf71c389d06ee45c0d915f7c19ee6607dfba7b0cf959008\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 10 00:01:01.678364 containerd[2047]: 
time="2025-05-10T00:01:01.677947132Z" level=info msg="CreateContainer within sandbox \"4ff31acd35e2ac8bdaf71c389d06ee45c0d915f7c19ee6607dfba7b0cf959008\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9407abfe6da281dce1d7413b65e9879518d0b068bc854f07b0f5261bda30afd8\"" May 10 00:01:01.681401 containerd[2047]: time="2025-05-10T00:01:01.680967040Z" level=info msg="StartContainer for \"9407abfe6da281dce1d7413b65e9879518d0b068bc854f07b0f5261bda30afd8\"" May 10 00:01:01.782414 containerd[2047]: time="2025-05-10T00:01:01.782206768Z" level=info msg="StartContainer for \"9407abfe6da281dce1d7413b65e9879518d0b068bc854f07b0f5261bda30afd8\" returns successfully" May 10 00:01:02.570495 kubelet[3375]: E0510 00:01:02.569844 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j62d4" podUID="0c2c3604-3d54-40f4-b6c5-046056f34637" May 10 00:01:02.680636 containerd[2047]: time="2025-05-10T00:01:02.680538401Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:01:02.709588 kubelet[3375]: I0510 00:01:02.709119 3375 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 10 00:01:02.728620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9407abfe6da281dce1d7413b65e9879518d0b068bc854f07b0f5261bda30afd8-rootfs.mount: Deactivated successfully. May 10 00:01:02.769607 kubelet[3375]: I0510 00:01:02.765001 3375 topology_manager.go:215] "Topology Admit Handler" podUID="674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lgml6" May 10 00:01:02.777233 kubelet[3375]: W0510 00:01:02.775871 3375 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-31-45" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-45' and this object May 10 00:01:02.777233 kubelet[3375]: E0510 00:01:02.775977 3375 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-31-45" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-45' and this object May 10 00:01:02.781895 kubelet[3375]: I0510 00:01:02.780667 3375 topology_manager.go:215] "Topology Admit Handler" podUID="db502810-c7ca-4ff1-88e0-b13b0b451d63" podNamespace="calico-apiserver" podName="calico-apiserver-849c5f8b95-f5vqw" May 10 00:01:02.785949 kubelet[3375]: I0510 00:01:02.784925 3375 topology_manager.go:215] "Topology Admit Handler" podUID="fb912ddc-f907-4e24-afb8-4c04709bf4e6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jxvbb" May 10 00:01:02.793878 kubelet[3375]: I0510 00:01:02.791768 3375 topology_manager.go:215] "Topology Admit Handler" podUID="91e59d68-96e3-41cb-8fed-74ab2b45acc7" podNamespace="calico-system" podName="calico-kube-controllers-7d786f5787-wbk9p" May 10 00:01:02.813066 kubelet[3375]: I0510 00:01:02.810845 3375 topology_manager.go:215] "Topology Admit Handler" 
podUID="32fc493b-ac7b-4ffe-adf4-cc372f4e4150" podNamespace="calico-apiserver" podName="calico-apiserver-849c5f8b95-l4kql" May 10 00:01:02.819810 kubelet[3375]: I0510 00:01:02.819771 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb912ddc-f907-4e24-afb8-4c04709bf4e6-config-volume\") pod \"coredns-7db6d8ff4d-jxvbb\" (UID: \"fb912ddc-f907-4e24-afb8-4c04709bf4e6\") " pod="kube-system/coredns-7db6d8ff4d-jxvbb" May 10 00:01:02.825517 kubelet[3375]: I0510 00:01:02.820424 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91e59d68-96e3-41cb-8fed-74ab2b45acc7-tigera-ca-bundle\") pod \"calico-kube-controllers-7d786f5787-wbk9p\" (UID: \"91e59d68-96e3-41cb-8fed-74ab2b45acc7\") " pod="calico-system/calico-kube-controllers-7d786f5787-wbk9p" May 10 00:01:02.825517 kubelet[3375]: I0510 00:01:02.820482 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbxf\" (UniqueName: \"kubernetes.io/projected/91e59d68-96e3-41cb-8fed-74ab2b45acc7-kube-api-access-hpbxf\") pod \"calico-kube-controllers-7d786f5787-wbk9p\" (UID: \"91e59d68-96e3-41cb-8fed-74ab2b45acc7\") " pod="calico-system/calico-kube-controllers-7d786f5787-wbk9p" May 10 00:01:02.825517 kubelet[3375]: I0510 00:01:02.820527 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6-config-volume\") pod \"coredns-7db6d8ff4d-lgml6\" (UID: \"674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6\") " pod="kube-system/coredns-7db6d8ff4d-lgml6" May 10 00:01:02.825517 kubelet[3375]: I0510 00:01:02.820567 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db502810-c7ca-4ff1-88e0-b13b0b451d63-calico-apiserver-certs\") pod \"calico-apiserver-849c5f8b95-f5vqw\" (UID: \"db502810-c7ca-4ff1-88e0-b13b0b451d63\") " pod="calico-apiserver/calico-apiserver-849c5f8b95-f5vqw" May 10 00:01:02.825517 kubelet[3375]: I0510 00:01:02.820608 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/32fc493b-ac7b-4ffe-adf4-cc372f4e4150-calico-apiserver-certs\") pod \"calico-apiserver-849c5f8b95-l4kql\" (UID: \"32fc493b-ac7b-4ffe-adf4-cc372f4e4150\") " pod="calico-apiserver/calico-apiserver-849c5f8b95-l4kql" May 10 00:01:02.825898 kubelet[3375]: I0510 00:01:02.820652 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdm4f\" (UniqueName: \"kubernetes.io/projected/db502810-c7ca-4ff1-88e0-b13b0b451d63-kube-api-access-jdm4f\") pod \"calico-apiserver-849c5f8b95-f5vqw\" (UID: \"db502810-c7ca-4ff1-88e0-b13b0b451d63\") " pod="calico-apiserver/calico-apiserver-849c5f8b95-f5vqw" May 10 00:01:02.825898 kubelet[3375]: I0510 00:01:02.820695 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf27k\" (UniqueName: \"kubernetes.io/projected/fb912ddc-f907-4e24-afb8-4c04709bf4e6-kube-api-access-cf27k\") pod \"coredns-7db6d8ff4d-jxvbb\" (UID: \"fb912ddc-f907-4e24-afb8-4c04709bf4e6\") " pod="kube-system/coredns-7db6d8ff4d-jxvbb" May 10 00:01:02.825898 
kubelet[3375]: I0510 00:01:02.820735 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9jgh\" (UniqueName: \"kubernetes.io/projected/674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6-kube-api-access-q9jgh\") pod \"coredns-7db6d8ff4d-lgml6\" (UID: \"674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6\") " pod="kube-system/coredns-7db6d8ff4d-lgml6" May 10 00:01:02.825898 kubelet[3375]: I0510 00:01:02.820783 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv5cz\" (UniqueName: \"kubernetes.io/projected/32fc493b-ac7b-4ffe-adf4-cc372f4e4150-kube-api-access-pv5cz\") pod \"calico-apiserver-849c5f8b95-l4kql\" (UID: \"32fc493b-ac7b-4ffe-adf4-cc372f4e4150\") " pod="calico-apiserver/calico-apiserver-849c5f8b95-l4kql" May 10 00:01:03.164391 containerd[2047]: time="2025-05-10T00:01:03.164237463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849c5f8b95-f5vqw,Uid:db502810-c7ca-4ff1-88e0-b13b0b451d63,Namespace:calico-apiserver,Attempt:0,}" May 10 00:01:03.190079 containerd[2047]: time="2025-05-10T00:01:03.190028871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d786f5787-wbk9p,Uid:91e59d68-96e3-41cb-8fed-74ab2b45acc7,Namespace:calico-system,Attempt:0,}" May 10 00:01:03.194817 containerd[2047]: time="2025-05-10T00:01:03.194757495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849c5f8b95-l4kql,Uid:32fc493b-ac7b-4ffe-adf4-cc372f4e4150,Namespace:calico-apiserver,Attempt:0,}" May 10 00:01:03.701219 containerd[2047]: time="2025-05-10T00:01:03.701142078Z" level=info msg="shim disconnected" id=9407abfe6da281dce1d7413b65e9879518d0b068bc854f07b0f5261bda30afd8 namespace=k8s.io May 10 00:01:03.701219 containerd[2047]: time="2025-05-10T00:01:03.701214330Z" level=warning msg="cleaning up after shim disconnected" id=9407abfe6da281dce1d7413b65e9879518d0b068bc854f07b0f5261bda30afd8 namespace=k8s.io May 10 00:01:03.702798 containerd[2047]: time="2025-05-10T00:01:03.701236650Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:01:03.906307 containerd[2047]: time="2025-05-10T00:01:03.906084391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 10 00:01:03.924343 kubelet[3375]: E0510 00:01:03.922886 3375 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 10 00:01:03.924343 kubelet[3375]: E0510 00:01:03.923031 3375 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fb912ddc-f907-4e24-afb8-4c04709bf4e6-config-volume podName:fb912ddc-f907-4e24-afb8-4c04709bf4e6 nodeName:}" failed. No retries permitted until 2025-05-10 00:01:04.422999651 +0000 UTC m=+36.064632642 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fb912ddc-f907-4e24-afb8-4c04709bf4e6-config-volume") pod "coredns-7db6d8ff4d-jxvbb" (UID: "fb912ddc-f907-4e24-afb8-4c04709bf4e6") : failed to sync configmap cache: timed out waiting for the condition May 10 00:01:03.932151 kubelet[3375]: E0510 00:01:03.930131 3375 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 10 00:01:03.932151 kubelet[3375]: E0510 00:01:03.930244 3375 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6-config-volume podName:674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6 nodeName:}" failed. No retries permitted until 2025-05-10 00:01:04.430215815 +0000 UTC m=+36.071848806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6-config-volume") pod "coredns-7db6d8ff4d-lgml6" (UID: "674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6") : failed to sync configmap cache: timed out waiting for the condition May 10 00:01:03.959202 containerd[2047]: time="2025-05-10T00:01:03.958399243Z" level=error msg="Failed to destroy network for sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:03.966987 containerd[2047]: time="2025-05-10T00:01:03.966619723Z" level=error msg="encountered an error cleaning up failed sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:03.966903 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a-shm.mount: Deactivated successfully. 
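The nestedpendingoperations entries above show the kubelet's per-volume retry backoff: the first MountVolume.SetUp failure for each configmap volume schedules a retry 500 ms out ("durationBeforeRetry 500ms"), and repeated failures lengthen the delay. A sketch of that policy; the 500 ms initial delay is taken from the log, while the doubling factor and the roughly two-minute cap are the kubelet exponential-backoff defaults as I understand them, assumed here rather than visible in this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// 500ms matches the log's "durationBeforeRetry 500ms"; the 2x factor
	// and 2m2s cap are assumed defaults, not values read from this log.
	const initialDelay = 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second

	delay := initialDelay
	for failures := 1; failures <= 10; failures++ {
		fmt.Printf("failure %2d: no retries permitted for %v\n", failures, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Here the backoff never escalates far: the coredns configmap errors are only a cache-sync race with the node authorizer, and the 500 ms retry succeeds once the reflector is allowed to list the configmap.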
May 10 00:01:03.968125 containerd[2047]: time="2025-05-10T00:01:03.967775935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849c5f8b95-f5vqw,Uid:db502810-c7ca-4ff1-88e0-b13b0b451d63,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:03.968851 kubelet[3375]: E0510 00:01:03.968653 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:03.969163 kubelet[3375]: E0510 00:01:03.968809 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849c5f8b95-f5vqw" May 10 00:01:03.969163 kubelet[3375]: E0510 00:01:03.969031 3375 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849c5f8b95-f5vqw" May 10 00:01:03.969863 kubelet[3375]: E0510 00:01:03.969647 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849c5f8b95-f5vqw_calico-apiserver(db502810-c7ca-4ff1-88e0-b13b0b451d63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849c5f8b95-f5vqw_calico-apiserver(db502810-c7ca-4ff1-88e0-b13b0b451d63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849c5f8b95-f5vqw" podUID="db502810-c7ca-4ff1-88e0-b13b0b451d63" May 10 00:01:03.990316 containerd[2047]: time="2025-05-10T00:01:03.990038203Z" level=error msg="Failed to destroy network for sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:03.995466 containerd[2047]: time="2025-05-10T00:01:03.993457495Z" level=error msg="encountered an error cleaning up failed sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:03.995466 containerd[2047]: time="2025-05-10T00:01:03.993575647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d786f5787-wbk9p,Uid:91e59d68-96e3-41cb-8fed-74ab2b45acc7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:03.997232 kubelet[3375]: E0510 00:01:03.995553 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:03.997232 kubelet[3375]: E0510 00:01:03.995623 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d786f5787-wbk9p" May 10 00:01:03.997232 kubelet[3375]: E0510 00:01:03.995656 3375 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d786f5787-wbk9p" May 10 00:01:03.996386 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a-shm.mount: Deactivated successfully. 
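Every sandbox failure in this stretch bottoms out in the same stat(2) call: the Calico CNI plugin reads /var/lib/calico/nodename to learn which Calico node it is running on, and the file does not exist yet because the calico-node container has not finished starting. The failing check reduces to the following; the path is from the log, and the surrounding "check that the calico/node container is running" wording is Calico's own error wrapping:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The file calico-node writes at startup and the CNI plugin requires.
	const nodenameFile = "/var/lib/calico/nodename"

	if _, err := os.Stat(nodenameFile); err != nil {
		// On a node where calico-node has not yet run, this prints:
		// stat /var/lib/calico/nodename: no such file or directory
		fmt.Println(err)
	}
}
```

Note that the (add) and (delete) paths fail identically, which is why containerd cannot even tear down the half-created sandboxes and marks them SANDBOX_UNKNOWN.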
May 10 00:01:03.997630 kubelet[3375]: E0510 00:01:03.995719 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d786f5787-wbk9p_calico-system(91e59d68-96e3-41cb-8fed-74ab2b45acc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d786f5787-wbk9p_calico-system(91e59d68-96e3-41cb-8fed-74ab2b45acc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d786f5787-wbk9p" podUID="91e59d68-96e3-41cb-8fed-74ab2b45acc7" May 10 00:01:04.012795 containerd[2047]: time="2025-05-10T00:01:04.010036383Z" level=error msg="Failed to destroy network for sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.012795 containerd[2047]: time="2025-05-10T00:01:04.012071907Z" level=error msg="encountered an error cleaning up failed sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.012795 containerd[2047]: time="2025-05-10T00:01:04.012155247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849c5f8b95-l4kql,Uid:32fc493b-ac7b-4ffe-adf4-cc372f4e4150,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.017222 kubelet[3375]: E0510 00:01:04.013492 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.017222 kubelet[3375]: E0510 00:01:04.013585 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849c5f8b95-l4kql" May 10 00:01:04.017222 kubelet[3375]: E0510 00:01:04.013619 3375 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849c5f8b95-l4kql" May 10 00:01:04.016630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8-shm.mount: Deactivated successfully. May 10 00:01:04.017680 kubelet[3375]: E0510 00:01:04.013699 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849c5f8b95-l4kql_calico-apiserver(32fc493b-ac7b-4ffe-adf4-cc372f4e4150)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849c5f8b95-l4kql_calico-apiserver(32fc493b-ac7b-4ffe-adf4-cc372f4e4150)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849c5f8b95-l4kql" podUID="32fc493b-ac7b-4ffe-adf4-cc372f4e4150" May 10 00:01:04.574724 containerd[2047]: time="2025-05-10T00:01:04.574594110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j62d4,Uid:0c2c3604-3d54-40f4-b6c5-046056f34637,Namespace:calico-system,Attempt:0,}" May 10 00:01:04.612197 containerd[2047]: time="2025-05-10T00:01:04.611674098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgml6,Uid:674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6,Namespace:kube-system,Attempt:0,}" May 10 00:01:04.682960 containerd[2047]: time="2025-05-10T00:01:04.682884787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxvbb,Uid:fb912ddc-f907-4e24-afb8-4c04709bf4e6,Namespace:kube-system,Attempt:0,}" May 10 00:01:04.739548 containerd[2047]: time="2025-05-10T00:01:04.739455511Z" level=error msg="Failed to destroy network for sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.740872 containerd[2047]: time="2025-05-10T00:01:04.740801815Z" level=error msg="encountered an error cleaning up failed sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.744552 containerd[2047]: time="2025-05-10T00:01:04.743967295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j62d4,Uid:0c2c3604-3d54-40f4-b6c5-046056f34637,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.747688 kubelet[3375]: E0510 00:01:04.747586 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.747688 kubelet[3375]: E0510 00:01:04.747670 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j62d4" May 10 00:01:04.748372 kubelet[3375]: E0510 00:01:04.747705 3375 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j62d4" May 10 00:01:04.748871 kubelet[3375]: E0510 00:01:04.748805 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j62d4_calico-system(0c2c3604-3d54-40f4-b6c5-046056f34637)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j62d4_calico-system(0c2c3604-3d54-40f4-b6c5-046056f34637)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j62d4" podUID="0c2c3604-3d54-40f4-b6c5-046056f34637" May 10 00:01:04.766220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71-shm.mount: Deactivated successfully. 
May 10 00:01:04.796910 containerd[2047]: time="2025-05-10T00:01:04.796847611Z" level=error msg="Failed to destroy network for sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.797676 containerd[2047]: time="2025-05-10T00:01:04.797625163Z" level=error msg="encountered an error cleaning up failed sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.797893 containerd[2047]: time="2025-05-10T00:01:04.797850379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgml6,Uid:674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.798296 kubelet[3375]: E0510 00:01:04.798233 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.798532 kubelet[3375]: E0510 00:01:04.798490 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lgml6" May 10 00:01:04.798787 kubelet[3375]: E0510 00:01:04.798632 3375 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lgml6" May 10 00:01:04.798787 kubelet[3375]: E0510 00:01:04.798729 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lgml6_kube-system(674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lgml6_kube-system(674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-lgml6" podUID="674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6" May 10 00:01:04.807667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41-shm.mount: Deactivated successfully. May 10 00:01:04.849876 containerd[2047]: time="2025-05-10T00:01:04.849565087Z" level=error msg="Failed to destroy network for sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.850918 containerd[2047]: time="2025-05-10T00:01:04.850609087Z" level=error msg="encountered an error cleaning up failed sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.850918 containerd[2047]: time="2025-05-10T00:01:04.850752319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxvbb,Uid:fb912ddc-f907-4e24-afb8-4c04709bf4e6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.853312 kubelet[3375]: E0510 00:01:04.852601 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:04.853312 kubelet[3375]: E0510 00:01:04.852818 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jxvbb" May 10 00:01:04.853312 kubelet[3375]: E0510 00:01:04.852875 3375 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jxvbb" May 10 00:01:04.853778 kubelet[3375]: E0510 00:01:04.852950 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jxvbb_kube-system(fb912ddc-f907-4e24-afb8-4c04709bf4e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jxvbb_kube-system(fb912ddc-f907-4e24-afb8-4c04709bf4e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jxvbb" podUID="fb912ddc-f907-4e24-afb8-4c04709bf4e6" May 10 00:01:04.859750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321-shm.mount: Deactivated successfully. May 10 00:01:04.904097 kubelet[3375]: I0510 00:01:04.903860 3375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:04.905741 containerd[2047]: time="2025-05-10T00:01:04.904991060Z" level=info msg="StopPodSandbox for \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\"" May 10 00:01:04.905741 containerd[2047]: time="2025-05-10T00:01:04.905405540Z" level=info msg="Ensure that sandbox d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321 in task-service has been cleanup successfully" May 10 00:01:04.909641 kubelet[3375]: I0510 00:01:04.909478 3375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:04.914181 containerd[2047]: time="2025-05-10T00:01:04.914074712Z" level=info msg="StopPodSandbox for \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\"" May 10 00:01:04.916219 containerd[2047]: time="2025-05-10T00:01:04.914408444Z" level=info msg="Ensure that sandbox 241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8 in task-service has been cleanup successfully" May 10 00:01:04.918387 kubelet[3375]: I0510 00:01:04.917635 3375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:04.920993 containerd[2047]: time="2025-05-10T00:01:04.920307116Z" level=info msg="StopPodSandbox for \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\"" May 10 00:01:04.920993 containerd[2047]: time="2025-05-10T00:01:04.920705132Z" level=info msg="Ensure that sandbox 52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a in task-service has been cleanup successfully" May 10 00:01:04.928229 kubelet[3375]: I0510 00:01:04.928165 3375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:04.930928 containerd[2047]: time="2025-05-10T00:01:04.929404556Z" level=info msg="StopPodSandbox for \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\"" May 10 00:01:04.930928 containerd[2047]: time="2025-05-10T00:01:04.929694716Z" level=info msg="Ensure that sandbox dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a in task-service has been cleanup successfully" May 10 00:01:04.941185 kubelet[3375]: I0510 00:01:04.941127 3375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:04.944199 containerd[2047]: time="2025-05-10T00:01:04.943966184Z" level=info msg="StopPodSandbox for \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\"" May 10 00:01:04.949738 containerd[2047]: time="2025-05-10T00:01:04.949507952Z" level=info msg="Ensure that sandbox 
24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41 in task-service has been cleanup successfully" May 10 00:01:04.951829 kubelet[3375]: I0510 00:01:04.951553 3375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:04.959495 containerd[2047]: time="2025-05-10T00:01:04.959425892Z" level=info msg="StopPodSandbox for \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\"" May 10 00:01:04.960397 containerd[2047]: time="2025-05-10T00:01:04.959730836Z" level=info msg="Ensure that sandbox d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71 in task-service has been cleanup successfully" May 10 00:01:05.129734 containerd[2047]: time="2025-05-10T00:01:05.129423605Z" level=error msg="StopPodSandbox for \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\" failed" error="failed to destroy network for sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:05.129874 kubelet[3375]: E0510 00:01:05.129763 3375 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:05.129944 kubelet[3375]: E0510 00:01:05.129837 3375 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321"} May 10 00:01:05.129944 kubelet[3375]: E0510 00:01:05.129923 3375 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb912ddc-f907-4e24-afb8-4c04709bf4e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:05.131786 kubelet[3375]: E0510 00:01:05.129962 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb912ddc-f907-4e24-afb8-4c04709bf4e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jxvbb" podUID="fb912ddc-f907-4e24-afb8-4c04709bf4e6" May 10 00:01:05.138516 containerd[2047]: time="2025-05-10T00:01:05.138337421Z" level=error msg="StopPodSandbox for \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\" failed" error="failed to destroy network for sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:05.138894 kubelet[3375]: E0510 00:01:05.138665 3375 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:05.138894 kubelet[3375]: E0510 00:01:05.138735 3375 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a"} May 10 00:01:05.138894 kubelet[3375]: E0510 00:01:05.138792 3375 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91e59d68-96e3-41cb-8fed-74ab2b45acc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:05.138894 kubelet[3375]: E0510 00:01:05.138832 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91e59d68-96e3-41cb-8fed-74ab2b45acc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d786f5787-wbk9p" podUID="91e59d68-96e3-41cb-8fed-74ab2b45acc7" May 10 00:01:05.156378 containerd[2047]: time="2025-05-10T00:01:05.156032297Z" level=error msg="StopPodSandbox for \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\" failed" error="failed to destroy network for sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:05.156685 kubelet[3375]: E0510 00:01:05.156388 3375 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:05.156685 kubelet[3375]: E0510 00:01:05.156453 3375 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8"} May 10 00:01:05.156685 kubelet[3375]: E0510 00:01:05.156517 3375 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"32fc493b-ac7b-4ffe-adf4-cc372f4e4150\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:05.156685 kubelet[3375]: E0510 00:01:05.156557 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"32fc493b-ac7b-4ffe-adf4-cc372f4e4150\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849c5f8b95-l4kql" podUID="32fc493b-ac7b-4ffe-adf4-cc372f4e4150" May 10 00:01:05.157855 containerd[2047]: time="2025-05-10T00:01:05.157785761Z" level=error msg="StopPodSandbox for \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\" failed" error="failed to destroy network for sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:05.158456 kubelet[3375]: E0510 00:01:05.158085 3375 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:05.158456 kubelet[3375]: E0510 00:01:05.158195 3375 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a"} May 10 00:01:05.158456 kubelet[3375]: E0510 00:01:05.158253 3375 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db502810-c7ca-4ff1-88e0-b13b0b451d63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:05.158456 kubelet[3375]: E0510 00:01:05.158322 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db502810-c7ca-4ff1-88e0-b13b0b451d63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849c5f8b95-f5vqw" podUID="db502810-c7ca-4ff1-88e0-b13b0b451d63" May 10 00:01:05.161935 
containerd[2047]: time="2025-05-10T00:01:05.161827133Z" level=error msg="StopPodSandbox for \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\" failed" error="failed to destroy network for sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:05.162543 kubelet[3375]: E0510 00:01:05.162179 3375 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:05.162543 kubelet[3375]: E0510 00:01:05.162290 3375 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41"} May 10 00:01:05.162543 kubelet[3375]: E0510 00:01:05.162352 3375 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:05.162543 kubelet[3375]: E0510 00:01:05.162394 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lgml6" podUID="674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6" May 10 00:01:05.164983 containerd[2047]: time="2025-05-10T00:01:05.164895605Z" level=error msg="StopPodSandbox for \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\" failed" error="failed to destroy network for sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:01:05.165407 kubelet[3375]: E0510 00:01:05.165258 3375 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:05.165495 kubelet[3375]: E0510 00:01:05.165446 3375 kuberuntime_manager.go:1375] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71"} May 10 00:01:05.165566 kubelet[3375]: E0510 00:01:05.165535 3375 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c2c3604-3d54-40f4-b6c5-046056f34637\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:01:05.165670 kubelet[3375]: E0510 00:01:05.165582 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c2c3604-3d54-40f4-b6c5-046056f34637\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j62d4" podUID="0c2c3604-3d54-40f4-b6c5-046056f34637" May 10 00:01:05.778883 kubelet[3375]: I0510 00:01:05.778587 3375 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:01:10.084576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3889750393.mount: Deactivated successfully. May 10 00:01:10.152772 containerd[2047]: time="2025-05-10T00:01:10.152679766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:10.154547 containerd[2047]: time="2025-05-10T00:01:10.154477210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 10 00:01:10.155971 containerd[2047]: time="2025-05-10T00:01:10.155820718Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:10.160705 containerd[2047]: time="2025-05-10T00:01:10.160601794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:10.162472 containerd[2047]: time="2025-05-10T00:01:10.161879086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 6.255719863s" May 10 00:01:10.162472 containerd[2047]: time="2025-05-10T00:01:10.161952010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 10 00:01:10.204899 containerd[2047]: time="2025-05-10T00:01:10.204768862Z" level=info msg="CreateContainer within sandbox \"4ff31acd35e2ac8bdaf71c389d06ee45c0d915f7c19ee6607dfba7b0cf959008\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 10 00:01:10.230876 containerd[2047]: 
time="2025-05-10T00:01:10.230767462Z" level=info msg="CreateContainer within sandbox \"4ff31acd35e2ac8bdaf71c389d06ee45c0d915f7c19ee6607dfba7b0cf959008\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"39a75ec1e9735ed89f87fe61e0547457128a4ba68eae33d529017dc1efb22466\"" May 10 00:01:10.234188 containerd[2047]: time="2025-05-10T00:01:10.234131698Z" level=info msg="StartContainer for \"39a75ec1e9735ed89f87fe61e0547457128a4ba68eae33d529017dc1efb22466\"" May 10 00:01:10.240844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087230467.mount: Deactivated successfully. May 10 00:01:10.345283 containerd[2047]: time="2025-05-10T00:01:10.344771459Z" level=info msg="StartContainer for \"39a75ec1e9735ed89f87fe61e0547457128a4ba68eae33d529017dc1efb22466\" returns successfully" May 10 00:01:10.485380 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 10 00:01:10.485508 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 10 00:01:11.086087 kubelet[3375]: I0510 00:01:11.085994 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-85kh5" podStartSLOduration=1.964788765 podStartE2EDuration="19.063167734s" podCreationTimestamp="2025-05-10 00:00:52 +0000 UTC" firstStartedPulling="2025-05-10 00:00:53.065070233 +0000 UTC m=+24.706703224" lastFinishedPulling="2025-05-10 00:01:10.163449214 +0000 UTC m=+41.805082193" observedRunningTime="2025-05-10 00:01:11.060176506 +0000 UTC m=+42.701809521" watchObservedRunningTime="2025-05-10 00:01:11.063167734 +0000 UTC m=+42.704800737" May 10 00:01:12.598301 kernel: bpftool[4912]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 10 00:01:12.904096 systemd-networkd[1599]: vxlan.calico: Link UP May 10 00:01:12.904110 systemd-networkd[1599]: vxlan.calico: Gained carrier May 10 00:01:12.904366 (udev-worker)[4762]: Network interface NamePolicy= disabled on kernel command line. May 10 00:01:12.957405 (udev-worker)[4764]: Network interface NamePolicy= disabled on kernel command line. May 10 00:01:12.959777 (udev-worker)[4765]: Network interface NamePolicy= disabled on kernel command line. May 10 00:01:14.336623 systemd-networkd[1599]: vxlan.calico: Gained IPv6LL May 10 00:01:16.479767 systemd[1]: Started sshd@7-172.31.31.45:22-147.75.109.163:43954.service - OpenSSH per-connection server daemon (147.75.109.163:43954). May 10 00:01:16.571199 containerd[2047]: time="2025-05-10T00:01:16.570958458Z" level=info msg="StopPodSandbox for \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\"" May 10 00:01:16.573862 containerd[2047]: time="2025-05-10T00:01:16.571106802Z" level=info msg="StopPodSandbox for \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\"" May 10 00:01:16.672443 sshd[4988]: Accepted publickey for core from 147.75.109.163 port 43954 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:16.675813 sshd[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:16.694202 systemd-logind[2020]: New session 8 of user core. May 10 00:01:16.701154 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 10 00:01:16.769596 ntpd[2003]: Listen normally on 6 vxlan.calico 192.168.1.0:123 May 10 00:01:16.770215 ntpd[2003]: 10 May 00:01:16 ntpd[2003]: Listen normally on 6 vxlan.calico 192.168.1.0:123 May 10 00:01:16.770215 ntpd[2003]: 10 May 00:01:16 ntpd[2003]: Listen normally on 7 vxlan.calico [fe80::6449:36ff:fef7:e47a%4]:123 May 10 00:01:16.769740 ntpd[2003]: Listen normally on 7 vxlan.calico [fe80::6449:36ff:fef7:e47a%4]:123 May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.746 [INFO][5013] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.747 [INFO][5013] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" iface="eth0" netns="/var/run/netns/cni-9aad308f-2123-d549-71c0-8d8fd0de3af3" May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.749 [INFO][5013] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" iface="eth0" netns="/var/run/netns/cni-9aad308f-2123-d549-71c0-8d8fd0de3af3" May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.749 [INFO][5013] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" iface="eth0" netns="/var/run/netns/cni-9aad308f-2123-d549-71c0-8d8fd0de3af3" May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.749 [INFO][5013] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.750 [INFO][5013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.824 [INFO][5031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" HandleID="k8s-pod-network.dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.824 [INFO][5031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.825 [INFO][5031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.858 [WARNING][5031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" HandleID="k8s-pod-network.dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.858 [INFO][5031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" HandleID="k8s-pod-network.dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.863 [INFO][5031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:16.891220 containerd[2047]: 2025-05-10 00:01:16.883 [INFO][5013] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:16.897975 containerd[2047]: time="2025-05-10T00:01:16.891442879Z" level=info msg="TearDown network for sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\" successfully" May 10 00:01:16.897975 containerd[2047]: time="2025-05-10T00:01:16.891485347Z" level=info msg="StopPodSandbox for \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\" returns successfully" May 10 00:01:16.908883 containerd[2047]: time="2025-05-10T00:01:16.906484423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849c5f8b95-f5vqw,Uid:db502810-c7ca-4ff1-88e0-b13b0b451d63,Namespace:calico-apiserver,Attempt:1,}" May 10 00:01:16.907729 systemd[1]: run-netns-cni\x2d9aad308f\x2d2123\x2dd549\x2d71c0\x2d8d8fd0de3af3.mount: Deactivated successfully. May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.742 [INFO][5017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.742 [INFO][5017] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" iface="eth0" netns="/var/run/netns/cni-fd611b3f-a7ec-92ef-1c49-004192f2619f" May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.744 [INFO][5017] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" iface="eth0" netns="/var/run/netns/cni-fd611b3f-a7ec-92ef-1c49-004192f2619f" May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.747 [INFO][5017] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" iface="eth0" netns="/var/run/netns/cni-fd611b3f-a7ec-92ef-1c49-004192f2619f" May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.748 [INFO][5017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.750 [INFO][5017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.827 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" HandleID="k8s-pod-network.24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.827 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.863 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.891 [WARNING][5033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" HandleID="k8s-pod-network.24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.892 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" HandleID="k8s-pod-network.24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.899 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:16.917540 containerd[2047]: 2025-05-10 00:01:16.911 [INFO][5017] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:16.923779 containerd[2047]: time="2025-05-10T00:01:16.918693535Z" level=info msg="TearDown network for sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\" successfully" May 10 00:01:16.923779 containerd[2047]: time="2025-05-10T00:01:16.918741511Z" level=info msg="StopPodSandbox for \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\" returns successfully" May 10 00:01:16.927060 systemd[1]: run-netns-cni\x2dfd611b3f\x2da7ec\x2d92ef\x2d1c49\x2d004192f2619f.mount: Deactivated successfully. May 10 00:01:16.928106 containerd[2047]: time="2025-05-10T00:01:16.927942583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgml6,Uid:674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6,Namespace:kube-system,Attempt:1,}" May 10 00:01:17.172211 sshd[4988]: pam_unix(sshd:session): session closed for user core May 10 00:01:17.188015 systemd[1]: sshd@7-172.31.31.45:22-147.75.109.163:43954.service: Deactivated successfully. May 10 00:01:17.203155 systemd[1]: session-8.scope: Deactivated successfully. May 10 00:01:17.207023 systemd-logind[2020]: Session 8 logged out. Waiting for processes to exit. 
May 10 00:01:17.215645 systemd-logind[2020]: Removed session 8. May 10 00:01:17.371331 systemd-networkd[1599]: cali5ea99bc4666: Link UP May 10 00:01:17.373631 systemd-networkd[1599]: cali5ea99bc4666: Gained carrier May 10 00:01:17.373881 (udev-worker)[5093]: Network interface NamePolicy= disabled on kernel command line. May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.194 [INFO][5066] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0 coredns-7db6d8ff4d- kube-system 674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6 827 0 2025-05-10 00:00:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-45 coredns-7db6d8ff4d-lgml6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ea99bc4666 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgml6" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.196 [INFO][5066] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgml6" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.276 [INFO][5081] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" HandleID="k8s-pod-network.a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.297 [INFO][5081] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" HandleID="k8s-pod-network.a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319ab0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-45", "pod":"coredns-7db6d8ff4d-lgml6", "timestamp":"2025-05-10 00:01:17.276842345 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.298 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.298 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
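Both allocation flows now in progress (coredns-7db6d8ff4d-lgml6 and, interleaved below, calico-apiserver-849c5f8b95-f5vqw) bracket their work with "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock": address assignment is serialised per host so concurrent CNI invocations cannot hand out the same IP. A minimal sketch of such a lock, assuming a flock(2)-style lock file; the path and helper are hypothetical, not Calico's implementation:

    // host_lock.go - hedged sketch of a host-wide IPAM lock via flock(2).
    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func withHostWideLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        // Blocks until no other CNI invocation on this host holds the lock,
        // serialising concurrent ADD/DEL calls like the two in the log.
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
        return fn()
    }

    func main() {
        err := withHostWideLock("/tmp/calico-ipam.lock", func() error {
            fmt.Println("assigning 1 IPv4 address from the host-affine block")
            return nil
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
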
May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.299 [INFO][5081] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.301 [INFO][5081] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" host="ip-172-31-31-45" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.310 [INFO][5081] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-45" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.318 [INFO][5081] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.321 [INFO][5081] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.326 [INFO][5081] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.326 [INFO][5081] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" host="ip-172-31-31-45" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.328 [INFO][5081] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769 May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.337 [INFO][5081] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" host="ip-172-31-31-45" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.347 [INFO][5081] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.1/26] block=192.168.1.0/26 handle="k8s-pod-network.a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" host="ip-172-31-31-45" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.347 [INFO][5081] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.1/26] handle="k8s-pod-network.a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" host="ip-172-31-31-45" May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.347 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
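The allocation steps above walk Calico's host-affine block scheme: ip-172-31-31-45 holds an affinity for 192.168.1.0/26, and individual pod addresses are claimed out of that 64-address block starting at 192.168.1.1. The arithmetic, checked with net/netip:

    // block_math.go - checks the host-affine block arithmetic from the IPAM
    // log lines above.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.1.0/26") // block from the log
        size := 1 << (32 - block.Bits())                 // 2^6 = 64 addresses
        fmt.Printf("block %s holds %d addresses\n", block, size)

        // First claims after the network address, matching the log:
        first := block.Addr().Next()
        fmt.Println("first claim: ", first)        // 192.168.1.1 (coredns-lgml6)
        fmt.Println("second claim:", first.Next()) // 192.168.1.2 (apiserver-f5vqw, below)
    }

The block's network address itself, 192.168.1.0, is not handed to a pod; it is the address the vxlan.calico device took earlier, which is why ntpd above reported listening on 192.168.1.0:123.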
May 10 00:01:17.423233 containerd[2047]: 2025-05-10 00:01:17.347 [INFO][5081] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.1/26] IPv6=[] ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" HandleID="k8s-pod-network.a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:17.428650 containerd[2047]: 2025-05-10 00:01:17.353 [INFO][5066] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgml6" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"coredns-7db6d8ff4d-lgml6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ea99bc4666", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:17.428650 containerd[2047]: 2025-05-10 00:01:17.354 [INFO][5066] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.1/32] ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgml6" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:17.428650 containerd[2047]: 2025-05-10 00:01:17.355 [INFO][5066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ea99bc4666 ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgml6" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:17.428650 containerd[2047]: 2025-05-10 00:01:17.374 [INFO][5066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgml6" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 
00:01:17.428650 containerd[2047]: 2025-05-10 00:01:17.379 [INFO][5066] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgml6" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769", Pod:"coredns-7db6d8ff4d-lgml6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ea99bc4666", MAC:"1e:5a:2e:01:29:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:17.428650 containerd[2047]: 2025-05-10 00:01:17.412 [INFO][5066] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgml6" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:17.475016 (udev-worker)[5096]: Network interface NamePolicy= disabled on kernel command line. 
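Two details of the WorkloadEndpoint dump above are easy to misread. The host-side interface name cali5ea99bc4666 is "cali" plus an 11-character suffix, i.e. exactly 15 characters, the maximum visible length of a Linux interface name (IFNAMSIZ is 16 including the trailing NUL); and the WorkloadEndpointPort values are printed in hex. A quick check:

    // endpoint_dump_notes.go - decodes two easily-misread details of the
    // WorkloadEndpoint dump above.
    package main

    import "fmt"

    func main() {
        // "cali" + 11 characters fills the 15 visible bytes that a Linux
        // interface name may carry (IFNAMSIZ = 16 including the NUL).
        name := "cali5ea99bc4666"
        fmt.Printf("%s: %d chars (limit 15)\n", name, len(name))

        // Port values from the dump, printed there in hex:
        fmt.Println("dns / dns-tcp port:", 0x35)   // 53
        fmt.Println("metrics port:      ", 0x23c1) // 9153
    }

So the endpoint exposes the standard coredns ports: DNS on 53 over both UDP and TCP, and Prometheus metrics on 9153.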
May 10 00:01:17.477529 systemd-networkd[1599]: calid0be7c0a1ed: Link UP May 10 00:01:17.479913 systemd-networkd[1599]: calid0be7c0a1ed: Gained carrier May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.200 [INFO][5053] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0 calico-apiserver-849c5f8b95- calico-apiserver db502810-c7ca-4ff1-88e0-b13b0b451d63 828 0 2025-05-10 00:00:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849c5f8b95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-45 calico-apiserver-849c5f8b95-f5vqw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid0be7c0a1ed [] []}} ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-f5vqw" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.200 [INFO][5053] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-f5vqw" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.277 [INFO][5079] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" HandleID="k8s-pod-network.f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.310 [INFO][5079] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" HandleID="k8s-pod-network.f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-45", "pod":"calico-apiserver-849c5f8b95-f5vqw", "timestamp":"2025-05-10 00:01:17.277693085 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.310 [INFO][5079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.347 [INFO][5079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.348 [INFO][5079] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.351 [INFO][5079] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" host="ip-172-31-31-45" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.360 [INFO][5079] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-45" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.383 [INFO][5079] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.399 [INFO][5079] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.420 [INFO][5079] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.421 [INFO][5079] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" host="ip-172-31-31-45" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.429 [INFO][5079] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56 May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.440 [INFO][5079] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" host="ip-172-31-31-45" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.455 [INFO][5079] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.2/26] block=192.168.1.0/26 handle="k8s-pod-network.f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" host="ip-172-31-31-45" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.456 [INFO][5079] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.2/26] handle="k8s-pod-network.f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" host="ip-172-31-31-45" May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.456 [INFO][5079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
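[Editor's note: the IPAM sequence above — acquire the host-wide lock, confirm the host's affinity for block 192.168.1.0/26, then write the block to claim an address — repeats for every pod in this section. A minimal sketch of the claim step, modeling the block as a 64-slot bitmap; Calico's real block structure (affinities, handles, CAS writes to the datastore) is richer and is elided here.]

package main

import (
	"fmt"
	"net/netip"
)

// block models a per-host IPAM block like 192.168.1.0/26:
// 64 addresses tracked by an allocation bitmap.
type block struct {
	cidr      netip.Prefix
	allocated [64]bool
}

// claim returns the first free address, mirroring the logged
// "Attempting to assign 1 addresses from block" / "Writing block
// in order to claim IPs" steps (the datastore write is elided).
func (b *block) claim() (netip.Addr, bool) {
	addr := b.cidr.Addr()
	for i := range b.allocated {
		if !b.allocated[i] {
			b.allocated[i] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.1.0/26")}
	b.claim() // 192.168.1.0, the network address (real IPAM reserves it)
	a1, _ := b.claim()
	a2, _ := b.claim()
	fmt.Println(a1, a2) // 192.168.1.1 192.168.1.2, matching the two pods above
}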
May 10 00:01:17.513668 containerd[2047]: 2025-05-10 00:01:17.456 [INFO][5079] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.2/26] IPv6=[] ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" HandleID="k8s-pod-network.f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:17.515606 containerd[2047]: 2025-05-10 00:01:17.464 [INFO][5053] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-f5vqw" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0", GenerateName:"calico-apiserver-849c5f8b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"db502810-c7ca-4ff1-88e0-b13b0b451d63", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849c5f8b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"calico-apiserver-849c5f8b95-f5vqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0be7c0a1ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:17.515606 containerd[2047]: 2025-05-10 00:01:17.467 [INFO][5053] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.2/32] ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-f5vqw" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:17.515606 containerd[2047]: 2025-05-10 00:01:17.467 [INFO][5053] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0be7c0a1ed ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-f5vqw" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:17.515606 containerd[2047]: 2025-05-10 00:01:17.481 [INFO][5053] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-f5vqw" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:17.515606 containerd[2047]: 2025-05-10 00:01:17.483 [INFO][5053] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-f5vqw" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0", GenerateName:"calico-apiserver-849c5f8b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"db502810-c7ca-4ff1-88e0-b13b0b451d63", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849c5f8b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56", Pod:"calico-apiserver-849c5f8b95-f5vqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0be7c0a1ed", MAC:"82:6b:da:d8:fc:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:17.515606 containerd[2047]: 2025-05-10 00:01:17.502 [INFO][5053] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-f5vqw" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:17.541288 containerd[2047]: time="2025-05-10T00:01:17.537897486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:17.542330 containerd[2047]: time="2025-05-10T00:01:17.541540687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:17.542330 containerd[2047]: time="2025-05-10T00:01:17.541600315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:17.542330 containerd[2047]: time="2025-05-10T00:01:17.541782991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:17.572613 containerd[2047]: time="2025-05-10T00:01:17.572566411Z" level=info msg="StopPodSandbox for \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\"" May 10 00:01:17.574410 containerd[2047]: time="2025-05-10T00:01:17.573801787Z" level=info msg="StopPodSandbox for \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\"" May 10 00:01:17.587791 containerd[2047]: time="2025-05-10T00:01:17.578488483Z" level=info msg="StopPodSandbox for \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\"" May 10 00:01:17.664389 containerd[2047]: time="2025-05-10T00:01:17.657134155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:17.664389 containerd[2047]: time="2025-05-10T00:01:17.662484427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:17.664389 containerd[2047]: time="2025-05-10T00:01:17.662583991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:17.680517 containerd[2047]: time="2025-05-10T00:01:17.670311043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:17.718956 containerd[2047]: time="2025-05-10T00:01:17.718891423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgml6,Uid:674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6,Namespace:kube-system,Attempt:1,} returns sandbox id \"a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769\"" May 10 00:01:17.741064 containerd[2047]: time="2025-05-10T00:01:17.741007159Z" level=info msg="CreateContainer within sandbox \"a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:01:17.796846 containerd[2047]: time="2025-05-10T00:01:17.796548488Z" level=info msg="CreateContainer within sandbox \"a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ddfa496447653295edfb5021cbd31530f4e4018d95af1dd29afda4ceb4322dc0\"" May 10 00:01:17.799359 containerd[2047]: time="2025-05-10T00:01:17.798231056Z" level=info msg="StartContainer for \"ddfa496447653295edfb5021cbd31530f4e4018d95af1dd29afda4ceb4322dc0\"" May 10 00:01:18.042190 systemd[1]: run-containerd-runc-k8s.io-ddfa496447653295edfb5021cbd31530f4e4018d95af1dd29afda4ceb4322dc0-runc.yVKIl3.mount: Deactivated successfully. 
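[Editor's note: the runc v2 shim plugin loads followed by CreateContainer/StartContainer above are the containerd side of the kubelet's CRI calls. A minimal sketch of that create/start lifecycle against containerd's classic v1 Go client; the socket path and the "k8s.io" namespace match common defaults but are assumptions, and the container ID is hypothetical.]

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The kubelet drives containerd in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// NewContainer corresponds roughly to "CreateContainer within sandbox";
	// NewTask + Start to "StartContainer" in the log above.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}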
May 10 00:01:18.187898 containerd[2047]: time="2025-05-10T00:01:18.183442194Z" level=info msg="StartContainer for \"ddfa496447653295edfb5021cbd31530f4e4018d95af1dd29afda4ceb4322dc0\" returns successfully" May 10 00:01:18.232422 containerd[2047]: time="2025-05-10T00:01:18.232348278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849c5f8b95-f5vqw,Uid:db502810-c7ca-4ff1-88e0-b13b0b451d63,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56\"" May 10 00:01:18.261560 containerd[2047]: time="2025-05-10T00:01:18.261493746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:17.922 [INFO][5221] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:17.923 [INFO][5221] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" iface="eth0" netns="/var/run/netns/cni-8871ed9e-c645-ee77-1a88-7a2937896a78" May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:17.923 [INFO][5221] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" iface="eth0" netns="/var/run/netns/cni-8871ed9e-c645-ee77-1a88-7a2937896a78" May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:17.943 [INFO][5221] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" iface="eth0" netns="/var/run/netns/cni-8871ed9e-c645-ee77-1a88-7a2937896a78" May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:17.943 [INFO][5221] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:17.943 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:18.236 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" HandleID="k8s-pod-network.d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:18.238 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:18.241 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:18.285 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" HandleID="k8s-pod-network.d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:18.285 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" HandleID="k8s-pod-network.d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:18.294 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:18.309946 containerd[2047]: 2025-05-10 00:01:18.299 [INFO][5221] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:18.316589 containerd[2047]: time="2025-05-10T00:01:18.310120542Z" level=info msg="TearDown network for sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\" successfully" May 10 00:01:18.316589 containerd[2047]: time="2025-05-10T00:01:18.310162698Z" level=info msg="StopPodSandbox for \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\" returns successfully" May 10 00:01:18.316589 containerd[2047]: time="2025-05-10T00:01:18.312387318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j62d4,Uid:0c2c3604-3d54-40f4-b6c5-046056f34637,Namespace:calico-system,Attempt:1,}" May 10 00:01:18.330044 systemd[1]: run-netns-cni\x2d8871ed9e\x2dc645\x2dee77\x2d1a88\x2d7a2937896a78.mount: Deactivated successfully. May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:17.979 [INFO][5204] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:17.979 [INFO][5204] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" iface="eth0" netns="/var/run/netns/cni-fdc665be-de19-cd04-bf06-11b3b7f52e40" May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:17.980 [INFO][5204] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" iface="eth0" netns="/var/run/netns/cni-fdc665be-de19-cd04-bf06-11b3b7f52e40" May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:17.981 [INFO][5204] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" iface="eth0" netns="/var/run/netns/cni-fdc665be-de19-cd04-bf06-11b3b7f52e40" May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:17.982 [INFO][5204] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:17.985 [INFO][5204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:18.248 [INFO][5279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" HandleID="k8s-pod-network.241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:18.248 [INFO][5279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:18.294 [INFO][5279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:18.328 [WARNING][5279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" HandleID="k8s-pod-network.241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:18.328 [INFO][5279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" HandleID="k8s-pod-network.241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:18.337 [INFO][5279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:18.388088 containerd[2047]: 2025-05-10 00:01:18.356 [INFO][5204] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:18.395296 containerd[2047]: time="2025-05-10T00:01:18.391147843Z" level=info msg="TearDown network for sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\" successfully" May 10 00:01:18.395296 containerd[2047]: time="2025-05-10T00:01:18.394064899Z" level=info msg="StopPodSandbox for \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\" returns successfully" May 10 00:01:18.397210 containerd[2047]: time="2025-05-10T00:01:18.396971587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849c5f8b95-l4kql,Uid:32fc493b-ac7b-4ffe-adf4-cc372f4e4150,Namespace:calico-apiserver,Attempt:1,}" May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.098 [INFO][5203] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.099 [INFO][5203] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" iface="eth0" netns="/var/run/netns/cni-36bf57d1-3cf8-ec8c-68df-3a13a31ec1e6" May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.103 [INFO][5203] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" iface="eth0" netns="/var/run/netns/cni-36bf57d1-3cf8-ec8c-68df-3a13a31ec1e6" May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.107 [INFO][5203] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" iface="eth0" netns="/var/run/netns/cni-36bf57d1-3cf8-ec8c-68df-3a13a31ec1e6" May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.107 [INFO][5203] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.108 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.302 [INFO][5293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" HandleID="k8s-pod-network.d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.302 [INFO][5293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.337 [INFO][5293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.384 [WARNING][5293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" HandleID="k8s-pod-network.d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.384 [INFO][5293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" HandleID="k8s-pod-network.d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.392 [INFO][5293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:18.409055 containerd[2047]: 2025-05-10 00:01:18.402 [INFO][5203] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:18.410054 containerd[2047]: time="2025-05-10T00:01:18.409599631Z" level=info msg="TearDown network for sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\" successfully" May 10 00:01:18.410054 containerd[2047]: time="2025-05-10T00:01:18.409757083Z" level=info msg="StopPodSandbox for \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\" returns successfully" May 10 00:01:18.412648 containerd[2047]: time="2025-05-10T00:01:18.412360747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxvbb,Uid:fb912ddc-f907-4e24-afb8-4c04709bf4e6,Namespace:kube-system,Attempt:1,}" May 10 00:01:18.753651 systemd-networkd[1599]: calif7409448506: Link UP May 10 00:01:18.758842 systemd-networkd[1599]: calif7409448506: Gained carrier May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.496 [INFO][5326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0 csi-node-driver- calico-system 0c2c3604-3d54-40f4-b6c5-046056f34637 846 0 2025-05-10 00:00:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-45 csi-node-driver-j62d4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif7409448506 [] []}} ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Namespace="calico-system" Pod="csi-node-driver-j62d4" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--j62d4-" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.496 [INFO][5326] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Namespace="calico-system" Pod="csi-node-driver-j62d4" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.619 [INFO][5359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" HandleID="k8s-pod-network.c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.648 [INFO][5359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" HandleID="k8s-pod-network.c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003fb910), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-45", "pod":"csi-node-driver-j62d4", "timestamp":"2025-05-10 00:01:18.61901462 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.648 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.649 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.649 [INFO][5359] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.656 [INFO][5359] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" host="ip-172-31-31-45" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.671 [INFO][5359] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-45" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.686 [INFO][5359] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.691 [INFO][5359] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.696 [INFO][5359] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.696 [INFO][5359] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" host="ip-172-31-31-45" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.700 [INFO][5359] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.724 [INFO][5359] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" host="ip-172-31-31-45" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.736 [INFO][5359] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.3/26] block=192.168.1.0/26 handle="k8s-pod-network.c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" host="ip-172-31-31-45" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.738 [INFO][5359] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.3/26] handle="k8s-pod-network.c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" host="ip-172-31-31-45" May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.738 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:01:18.820401 containerd[2047]: 2025-05-10 00:01:18.738 [INFO][5359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.3/26] IPv6=[] ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" HandleID="k8s-pod-network.c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:18.828414 containerd[2047]: 2025-05-10 00:01:18.746 [INFO][5326] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Namespace="calico-system" Pod="csi-node-driver-j62d4" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c2c3604-3d54-40f4-b6c5-046056f34637", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"csi-node-driver-j62d4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7409448506", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:18.828414 containerd[2047]: 2025-05-10 00:01:18.746 [INFO][5326] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.3/32] ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Namespace="calico-system" Pod="csi-node-driver-j62d4" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:18.828414 containerd[2047]: 2025-05-10 00:01:18.746 [INFO][5326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7409448506 ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Namespace="calico-system" Pod="csi-node-driver-j62d4" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:18.828414 containerd[2047]: 2025-05-10 00:01:18.759 [INFO][5326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Namespace="calico-system" Pod="csi-node-driver-j62d4" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:18.828414 containerd[2047]: 2025-05-10 00:01:18.761 [INFO][5326] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Namespace="calico-system" Pod="csi-node-driver-j62d4" 
WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c2c3604-3d54-40f4-b6c5-046056f34637", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea", Pod:"csi-node-driver-j62d4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7409448506", MAC:"e6:5d:7f:91:d2:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:18.828414 containerd[2047]: 2025-05-10 00:01:18.797 [INFO][5326] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea" Namespace="calico-system" Pod="csi-node-driver-j62d4" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:18.906303 containerd[2047]: time="2025-05-10T00:01:18.903375297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:18.906303 containerd[2047]: time="2025-05-10T00:01:18.903503685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:18.906303 containerd[2047]: time="2025-05-10T00:01:18.903541905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:18.906303 containerd[2047]: time="2025-05-10T00:01:18.903718341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:18.921098 systemd[1]: run-netns-cni\x2d36bf57d1\x2d3cf8\x2dec8c\x2d68df\x2d3a13a31ec1e6.mount: Deactivated successfully. May 10 00:01:18.923900 systemd[1]: run-netns-cni\x2dfdc665be\x2dde19\x2dcd04\x2dbf06\x2d11b3b7f52e40.mount: Deactivated successfully. 
May 10 00:01:18.939408 systemd-networkd[1599]: cali0fcfcfb9902: Link UP May 10 00:01:18.943328 systemd-networkd[1599]: cali0fcfcfb9902: Gained carrier May 10 00:01:19.012060 systemd-networkd[1599]: cali5ea99bc4666: Gained IPv6LL May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.538 [INFO][5335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0 calico-apiserver-849c5f8b95- calico-apiserver 32fc493b-ac7b-4ffe-adf4-cc372f4e4150 850 0 2025-05-10 00:00:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849c5f8b95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-45 calico-apiserver-849c5f8b95-l4kql eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0fcfcfb9902 [] []}} ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-l4kql" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.538 [INFO][5335] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-l4kql" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.699 [INFO][5366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" HandleID="k8s-pod-network.321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.731 [INFO][5366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" HandleID="k8s-pod-network.321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030f490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-45", "pod":"calico-apiserver-849c5f8b95-l4kql", "timestamp":"2025-05-10 00:01:18.699665504 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.731 [INFO][5366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.738 [INFO][5366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.738 [INFO][5366] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.744 [INFO][5366] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" host="ip-172-31-31-45" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.764 [INFO][5366] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-45" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.798 [INFO][5366] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.811 [INFO][5366] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.819 [INFO][5366] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.820 [INFO][5366] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" host="ip-172-31-31-45" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.825 [INFO][5366] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865 May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.855 [INFO][5366] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" host="ip-172-31-31-45" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.871 [INFO][5366] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.4/26] block=192.168.1.0/26 handle="k8s-pod-network.321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" host="ip-172-31-31-45" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.872 [INFO][5366] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.4/26] handle="k8s-pod-network.321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" host="ip-172-31-31-45" May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.872 [INFO][5366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:01:19.024829 containerd[2047]: 2025-05-10 00:01:18.872 [INFO][5366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.4/26] IPv6=[] ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" HandleID="k8s-pod-network.321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:19.029741 containerd[2047]: 2025-05-10 00:01:18.888 [INFO][5335] cni-plugin/k8s.go 386: Populated endpoint ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-l4kql" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0", GenerateName:"calico-apiserver-849c5f8b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"32fc493b-ac7b-4ffe-adf4-cc372f4e4150", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849c5f8b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"calico-apiserver-849c5f8b95-l4kql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fcfcfb9902", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:19.029741 containerd[2047]: 2025-05-10 00:01:18.890 [INFO][5335] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.4/32] ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-l4kql" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:19.029741 containerd[2047]: 2025-05-10 00:01:18.891 [INFO][5335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fcfcfb9902 ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-l4kql" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:19.029741 containerd[2047]: 2025-05-10 00:01:18.948 [INFO][5335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-l4kql" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:19.029741 containerd[2047]: 2025-05-10 00:01:18.952 [INFO][5335] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-l4kql" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0", GenerateName:"calico-apiserver-849c5f8b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"32fc493b-ac7b-4ffe-adf4-cc372f4e4150", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849c5f8b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865", Pod:"calico-apiserver-849c5f8b95-l4kql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fcfcfb9902", MAC:"7a:24:3c:92:98:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:19.029741 containerd[2047]: 2025-05-10 00:01:19.005 [INFO][5335] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865" Namespace="calico-apiserver" Pod="calico-apiserver-849c5f8b95-l4kql" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:19.076492 systemd-networkd[1599]: cali3998581dbd6: Link UP May 10 00:01:19.080547 systemd-networkd[1599]: cali3998581dbd6: Gained carrier May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.573 [INFO][5348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0 coredns-7db6d8ff4d- kube-system fb912ddc-f907-4e24-afb8-4c04709bf4e6 851 0 2025-05-10 00:00:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-45 coredns-7db6d8ff4d-jxvbb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3998581dbd6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxvbb" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.573 [INFO][5348] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxvbb" 
WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.723 [INFO][5372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" HandleID="k8s-pod-network.aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.763 [INFO][5372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" HandleID="k8s-pod-network.aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000fb120), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-45", "pod":"coredns-7db6d8ff4d-jxvbb", "timestamp":"2025-05-10 00:01:18.723099272 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.766 [INFO][5372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.872 [INFO][5372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.873 [INFO][5372] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.877 [INFO][5372] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" host="ip-172-31-31-45" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.888 [INFO][5372] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-45" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.938 [INFO][5372] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.953 [INFO][5372] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.996 [INFO][5372] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:18.997 [INFO][5372] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" host="ip-172-31-31-45" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:19.005 [INFO][5372] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:19.031 [INFO][5372] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" host="ip-172-31-31-45" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:19.049 [INFO][5372] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.5/26] block=192.168.1.0/26 
handle="k8s-pod-network.aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" host="ip-172-31-31-45" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:19.051 [INFO][5372] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.5/26] handle="k8s-pod-network.aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" host="ip-172-31-31-45" May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:19.053 [INFO][5372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:19.140320 containerd[2047]: 2025-05-10 00:01:19.053 [INFO][5372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.5/26] IPv6=[] ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" HandleID="k8s-pod-network.aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:19.142462 containerd[2047]: 2025-05-10 00:01:19.064 [INFO][5348] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxvbb" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb912ddc-f907-4e24-afb8-4c04709bf4e6", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"coredns-7db6d8ff4d-jxvbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3998581dbd6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:19.142462 containerd[2047]: 2025-05-10 00:01:19.066 [INFO][5348] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.5/32] ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxvbb" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:19.142462 containerd[2047]: 2025-05-10 00:01:19.066 [INFO][5348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3998581dbd6 
ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxvbb" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:19.142462 containerd[2047]: 2025-05-10 00:01:19.081 [INFO][5348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxvbb" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:19.142462 containerd[2047]: 2025-05-10 00:01:19.085 [INFO][5348] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxvbb" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb912ddc-f907-4e24-afb8-4c04709bf4e6", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea", Pod:"coredns-7db6d8ff4d-jxvbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3998581dbd6", MAC:"06:0c:57:e6:ab:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:19.142462 containerd[2047]: 2025-05-10 00:01:19.120 [INFO][5348] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jxvbb" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:19.213384 kubelet[3375]: I0510 00:01:19.211601 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lgml6" podStartSLOduration=35.210544423 podStartE2EDuration="35.210544423s" podCreationTimestamp="2025-05-10 00:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:01:19.175427323 +0000 
UTC m=+50.817060566" watchObservedRunningTime="2025-05-10 00:01:19.210544423 +0000 UTC m=+50.852177426" May 10 00:01:19.281177 containerd[2047]: time="2025-05-10T00:01:19.271359895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:19.281177 containerd[2047]: time="2025-05-10T00:01:19.272659123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:19.281177 containerd[2047]: time="2025-05-10T00:01:19.272996851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:19.281177 containerd[2047]: time="2025-05-10T00:01:19.273313171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:19.289739 containerd[2047]: time="2025-05-10T00:01:19.288643855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j62d4,Uid:0c2c3604-3d54-40f4-b6c5-046056f34637,Namespace:calico-system,Attempt:1,} returns sandbox id \"c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea\"" May 10 00:01:19.329338 systemd-networkd[1599]: calid0be7c0a1ed: Gained IPv6LL May 10 00:01:19.368311 containerd[2047]: time="2025-05-10T00:01:19.367573052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:19.368311 containerd[2047]: time="2025-05-10T00:01:19.367701224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:19.368311 containerd[2047]: time="2025-05-10T00:01:19.367742264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:19.368311 containerd[2047]: time="2025-05-10T00:01:19.367972328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:19.532111 containerd[2047]: time="2025-05-10T00:01:19.531793220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849c5f8b95-l4kql,Uid:32fc493b-ac7b-4ffe-adf4-cc372f4e4150,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865\"" May 10 00:01:19.541665 containerd[2047]: time="2025-05-10T00:01:19.541608140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jxvbb,Uid:fb912ddc-f907-4e24-afb8-4c04709bf4e6,Namespace:kube-system,Attempt:1,} returns sandbox id \"aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea\"" May 10 00:01:19.549254 containerd[2047]: time="2025-05-10T00:01:19.549108800Z" level=info msg="CreateContainer within sandbox \"aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:01:19.570183 containerd[2047]: time="2025-05-10T00:01:19.569734269Z" level=info msg="StopPodSandbox for \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\"" May 10 00:01:19.577954 containerd[2047]: time="2025-05-10T00:01:19.577898265Z" level=info msg="CreateContainer within sandbox \"aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"972a59e5d4ee4cb5cc20a8cbf65b75d92bbe0283c354fbf1e3e28633bbe45c95\"" May 10 00:01:19.581133 containerd[2047]: time="2025-05-10T00:01:19.579461481Z" level=info msg="StartContainer for \"972a59e5d4ee4cb5cc20a8cbf65b75d92bbe0283c354fbf1e3e28633bbe45c95\"" May 10 00:01:19.735184 containerd[2047]: time="2025-05-10T00:01:19.734518053Z" level=info msg="StartContainer for \"972a59e5d4ee4cb5cc20a8cbf65b75d92bbe0283c354fbf1e3e28633bbe45c95\" returns successfully" May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.759 [INFO][5578] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.762 [INFO][5578] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" iface="eth0" netns="/var/run/netns/cni-00ffbfc1-203e-1317-d347-fc2ee710eca9" May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.763 [INFO][5578] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" iface="eth0" netns="/var/run/netns/cni-00ffbfc1-203e-1317-d347-fc2ee710eca9" May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.765 [INFO][5578] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" iface="eth0" netns="/var/run/netns/cni-00ffbfc1-203e-1317-d347-fc2ee710eca9" May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.766 [INFO][5578] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.766 [INFO][5578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.880 [INFO][5606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" HandleID="k8s-pod-network.52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.882 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.882 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.921 [WARNING][5606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" HandleID="k8s-pod-network.52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.921 [INFO][5606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" HandleID="k8s-pod-network.52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.924 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:19.934895 containerd[2047]: 2025-05-10 00:01:19.929 [INFO][5578] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:19.937012 containerd[2047]: time="2025-05-10T00:01:19.936943954Z" level=info msg="TearDown network for sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\" successfully" May 10 00:01:19.937372 containerd[2047]: time="2025-05-10T00:01:19.937323406Z" level=info msg="StopPodSandbox for \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\" returns successfully" May 10 00:01:19.941221 containerd[2047]: time="2025-05-10T00:01:19.941138878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d786f5787-wbk9p,Uid:91e59d68-96e3-41cb-8fed-74ab2b45acc7,Namespace:calico-system,Attempt:1,}" May 10 00:01:19.950233 systemd[1]: run-netns-cni\x2d00ffbfc1\x2d203e\x2d1317\x2dd347\x2dfc2ee710eca9.mount: Deactivated successfully. 
May 10 00:01:19.960795 kubelet[3375]: I0510 00:01:19.960381 3375 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:01:19.970214 systemd-networkd[1599]: calif7409448506: Gained IPv6LL May 10 00:01:20.227156 systemd-networkd[1599]: cali0fcfcfb9902: Gained IPv6LL May 10 00:01:20.238731 kubelet[3375]: I0510 00:01:20.238603 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jxvbb" podStartSLOduration=36.238577732 podStartE2EDuration="36.238577732s" podCreationTimestamp="2025-05-10 00:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:01:20.2030585 +0000 UTC m=+51.844691515" watchObservedRunningTime="2025-05-10 00:01:20.238577732 +0000 UTC m=+51.880210771" May 10 00:01:20.556152 systemd-networkd[1599]: cali5dc9df6feb7: Link UP May 10 00:01:20.562162 systemd-networkd[1599]: cali5dc9df6feb7: Gained carrier May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.193 [INFO][5621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0 calico-kube-controllers-7d786f5787- calico-system 91e59d68-96e3-41cb-8fed-74ab2b45acc7 884 0 2025-05-10 00:00:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d786f5787 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-45 calico-kube-controllers-7d786f5787-wbk9p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5dc9df6feb7 [] []}} ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Namespace="calico-system" Pod="calico-kube-controllers-7d786f5787-wbk9p" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.194 [INFO][5621] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Namespace="calico-system" Pod="calico-kube-controllers-7d786f5787-wbk9p" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.418 [INFO][5655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" HandleID="k8s-pod-network.3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.445 [INFO][5655] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" HandleID="k8s-pod-network.3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034cd70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-45", "pod":"calico-kube-controllers-7d786f5787-wbk9p", "timestamp":"2025-05-10 00:01:20.418354869 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.446 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.446 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.447 [INFO][5655] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.452 [INFO][5655] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" host="ip-172-31-31-45" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.470 [INFO][5655] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-45" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.486 [INFO][5655] ipam/ipam.go 489: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.491 [INFO][5655] ipam/ipam.go 155: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.499 [INFO][5655] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.499 [INFO][5655] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" host="ip-172-31-31-45" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.504 [INFO][5655] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.514 [INFO][5655] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" host="ip-172-31-31-45" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.535 [INFO][5655] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.1.6/26] block=192.168.1.0/26 handle="k8s-pod-network.3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" host="ip-172-31-31-45" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.537 [INFO][5655] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.1.6/26] handle="k8s-pod-network.3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" host="ip-172-31-31-45" May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.537 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
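The walk just logged is Calico's per-node block affinity: ip-172-31-31-45 holds the block 192.168.1.0/26 and hands out single addresses from it under the host-wide lock (.5 went to coredns above, .6 goes to calico-kube-controllers here). A stdlib-only sketch of the block arithmetic, using nothing beyond what the log shows:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.1.0/26") // the affine block from the log
	fmt.Println("addresses per block:", 1<<(32-block.Bits()))                              // 64
	fmt.Println("block contains 192.168.1.6:", block.Contains(netip.MustParseAddr("192.168.1.6"))) // true
}

A /26 gives each node 64 addresses per claimed block, so every pod on this node so far has landed inside the same block without a second affinity being needed.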
May 10 00:01:20.625639 containerd[2047]: 2025-05-10 00:01:20.537 [INFO][5655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.6/26] IPv6=[] ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" HandleID="k8s-pod-network.3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:20.626886 containerd[2047]: 2025-05-10 00:01:20.546 [INFO][5621] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Namespace="calico-system" Pod="calico-kube-controllers-7d786f5787-wbk9p" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0", GenerateName:"calico-kube-controllers-7d786f5787-", Namespace:"calico-system", SelfLink:"", UID:"91e59d68-96e3-41cb-8fed-74ab2b45acc7", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d786f5787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"calico-kube-controllers-7d786f5787-wbk9p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5dc9df6feb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:20.626886 containerd[2047]: 2025-05-10 00:01:20.546 [INFO][5621] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.1.6/32] ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Namespace="calico-system" Pod="calico-kube-controllers-7d786f5787-wbk9p" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:20.626886 containerd[2047]: 2025-05-10 00:01:20.546 [INFO][5621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5dc9df6feb7 ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Namespace="calico-system" Pod="calico-kube-controllers-7d786f5787-wbk9p" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:20.626886 containerd[2047]: 2025-05-10 00:01:20.558 [INFO][5621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Namespace="calico-system" Pod="calico-kube-controllers-7d786f5787-wbk9p" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:20.626886 containerd[2047]: 2025-05-10 00:01:20.565 [INFO][5621] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Namespace="calico-system" Pod="calico-kube-controllers-7d786f5787-wbk9p" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0", GenerateName:"calico-kube-controllers-7d786f5787-", Namespace:"calico-system", SelfLink:"", UID:"91e59d68-96e3-41cb-8fed-74ab2b45acc7", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d786f5787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f", Pod:"calico-kube-controllers-7d786f5787-wbk9p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5dc9df6feb7", MAC:"da:53:08:bd:13:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:20.626886 containerd[2047]: 2025-05-10 00:01:20.604 [INFO][5621] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f" Namespace="calico-system" Pod="calico-kube-controllers-7d786f5787-wbk9p" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:20.691486 containerd[2047]: time="2025-05-10T00:01:20.691259074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:01:20.691486 containerd[2047]: time="2025-05-10T00:01:20.691390870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:01:20.691486 containerd[2047]: time="2025-05-10T00:01:20.691418206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:20.692092 containerd[2047]: time="2025-05-10T00:01:20.691849810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:01:20.737246 systemd-networkd[1599]: cali3998581dbd6: Gained IPv6LL May 10 00:01:20.797963 containerd[2047]: time="2025-05-10T00:01:20.797897363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d786f5787-wbk9p,Uid:91e59d68-96e3-41cb-8fed-74ab2b45acc7,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f\"" May 10 00:01:20.911258 systemd[1]: run-containerd-runc-k8s.io-39a75ec1e9735ed89f87fe61e0547457128a4ba68eae33d529017dc1efb22466-runc.BqoTUs.mount: Deactivated successfully. May 10 00:01:21.633023 containerd[2047]: time="2025-05-10T00:01:21.632836283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:21.636824 containerd[2047]: time="2025-05-10T00:01:21.636617495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 10 00:01:21.640095 containerd[2047]: time="2025-05-10T00:01:21.639960899Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:21.649448 containerd[2047]: time="2025-05-10T00:01:21.648917279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:21.653347 containerd[2047]: time="2025-05-10T00:01:21.651763967Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 3.390195041s" May 10 00:01:21.653347 containerd[2047]: time="2025-05-10T00:01:21.651856139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:01:21.657458 containerd[2047]: time="2025-05-10T00:01:21.656114591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 10 00:01:21.659445 containerd[2047]: time="2025-05-10T00:01:21.659239523Z" level=info msg="CreateContainer within sandbox \"f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:01:21.691493 containerd[2047]: time="2025-05-10T00:01:21.691144715Z" level=info msg="CreateContainer within sandbox \"f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95bfc949ea3ddd4f3dfdea2b808f6845bd556526b2e1cedc4e3925e0e7fff789\"" May 10 00:01:21.695477 containerd[2047]: time="2025-05-10T00:01:21.694470431Z" level=info msg="StartContainer for \"95bfc949ea3ddd4f3dfdea2b808f6845bd556526b2e1cedc4e3925e0e7fff789\"" May 10 00:01:21.928325 containerd[2047]: time="2025-05-10T00:01:21.928152912Z" level=info msg="StartContainer for \"95bfc949ea3ddd4f3dfdea2b808f6845bd556526b2e1cedc4e3925e0e7fff789\" returns successfully" May 10 00:01:22.209018 systemd[1]: Started sshd@8-172.31.31.45:22-147.75.109.163:47002.service 
- OpenSSH per-connection server daemon (147.75.109.163:47002). May 10 00:01:22.423520 sshd[5788]: Accepted publickey for core from 147.75.109.163 port 47002 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:22.435740 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:22.463654 systemd-logind[2020]: New session 9 of user core. May 10 00:01:22.466588 systemd-networkd[1599]: cali5dc9df6feb7: Gained IPv6LL May 10 00:01:22.475252 systemd[1]: Started session-9.scope - Session 9 of User core. May 10 00:01:22.797034 sshd[5788]: pam_unix(sshd:session): session closed for user core May 10 00:01:22.807751 systemd-logind[2020]: Session 9 logged out. Waiting for processes to exit. May 10 00:01:22.808051 systemd[1]: sshd@8-172.31.31.45:22-147.75.109.163:47002.service: Deactivated successfully. May 10 00:01:22.816242 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:01:22.822645 systemd-logind[2020]: Removed session 9. May 10 00:01:23.205093 containerd[2047]: time="2025-05-10T00:01:23.204932759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:23.207545 containerd[2047]: time="2025-05-10T00:01:23.207490151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 10 00:01:23.211298 containerd[2047]: time="2025-05-10T00:01:23.209938535Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:23.216244 containerd[2047]: time="2025-05-10T00:01:23.216192755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:23.218094 containerd[2047]: time="2025-05-10T00:01:23.217389455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.561202372s" May 10 00:01:23.218284 containerd[2047]: time="2025-05-10T00:01:23.218237291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 10 00:01:23.221203 containerd[2047]: time="2025-05-10T00:01:23.221148815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:01:23.226977 containerd[2047]: time="2025-05-10T00:01:23.226905011Z" level=info msg="CreateContainer within sandbox \"c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 10 00:01:23.284671 containerd[2047]: time="2025-05-10T00:01:23.284591447Z" level=info msg="CreateContainer within sandbox \"c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7da6303ede9b90f71a9a6c33eae679baf2b7a8756309d1abd9e5e0736096da87\"" May 10 00:01:23.286019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2072640661.mount: Deactivated successfully. 
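The PullImage lines above carry enough data to estimate registry throughput: a byte count and a wall-clock duration per image. A quick stdlib computation with the figures copied from the log; treating the reported "size" as the transferred byte count is an assumption, since containerd separately reports "bytes read" for the compressed stream:

package main

import (
	"fmt"
	"time"
)

func main() {
	pulls := []struct {
		ref   string
		bytes float64
		took  string
	}{
		{"calico/apiserver:v3.29.3", 41616801, "3.390195041s"},
		{"calico/csi:v3.29.3", 8844117, "1.561202372s"},
	}
	for _, p := range pulls {
		d, err := time.ParseDuration(p.took)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %.1f MiB/s\n", p.ref, p.bytes/d.Seconds()/(1<<20)) // ~11.7 and ~5.4 MiB/s
	}
}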
May 10 00:01:23.288732 containerd[2047]: time="2025-05-10T00:01:23.288510827Z" level=info msg="StartContainer for \"7da6303ede9b90f71a9a6c33eae679baf2b7a8756309d1abd9e5e0736096da87\"" May 10 00:01:23.609390 containerd[2047]: time="2025-05-10T00:01:23.609315925Z" level=info msg="StartContainer for \"7da6303ede9b90f71a9a6c33eae679baf2b7a8756309d1abd9e5e0736096da87\" returns successfully" May 10 00:01:23.613228 containerd[2047]: time="2025-05-10T00:01:23.609633037Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:23.618628 containerd[2047]: time="2025-05-10T00:01:23.618402385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 10 00:01:23.642951 containerd[2047]: time="2025-05-10T00:01:23.642419005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 420.906146ms" May 10 00:01:23.642951 containerd[2047]: time="2025-05-10T00:01:23.642489253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:01:23.646052 containerd[2047]: time="2025-05-10T00:01:23.645608677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 10 00:01:23.650005 containerd[2047]: time="2025-05-10T00:01:23.649256905Z" level=info msg="CreateContainer within sandbox \"321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:01:23.679778 containerd[2047]: time="2025-05-10T00:01:23.679619725Z" level=info msg="CreateContainer within sandbox \"321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8f0f5ed8f7cba3e9092e022d045734f4f10eb6a58c8894b8eb3930e512cb53d2\"" May 10 00:01:23.683119 containerd[2047]: time="2025-05-10T00:01:23.682354873Z" level=info msg="StartContainer for \"8f0f5ed8f7cba3e9092e022d045734f4f10eb6a58c8894b8eb3930e512cb53d2\"" May 10 00:01:23.831160 containerd[2047]: time="2025-05-10T00:01:23.831092102Z" level=info msg="StartContainer for \"8f0f5ed8f7cba3e9092e022d045734f4f10eb6a58c8894b8eb3930e512cb53d2\" returns successfully" May 10 00:01:24.228523 kubelet[3375]: I0510 00:01:24.228483 3375 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:01:24.270671 kubelet[3375]: I0510 00:01:24.267609 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-849c5f8b95-f5vqw" podStartSLOduration=29.867513659 podStartE2EDuration="33.267585396s" podCreationTimestamp="2025-05-10 00:00:51 +0000 UTC" firstStartedPulling="2025-05-10 00:01:18.25493541 +0000 UTC m=+49.896568401" lastFinishedPulling="2025-05-10 00:01:21.655007075 +0000 UTC m=+53.296640138" observedRunningTime="2025-05-10 00:01:22.230561206 +0000 UTC m=+53.872194293" watchObservedRunningTime="2025-05-10 00:01:24.267585396 +0000 UTC m=+55.909218399" May 10 00:01:24.270671 kubelet[3375]: I0510 00:01:24.267808 3375 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="calico-apiserver/calico-apiserver-849c5f8b95-l4kql" podStartSLOduration=29.157926127 podStartE2EDuration="33.267779088s" podCreationTimestamp="2025-05-10 00:00:51 +0000 UTC" firstStartedPulling="2025-05-10 00:01:19.534384992 +0000 UTC m=+51.176017971" lastFinishedPulling="2025-05-10 00:01:23.644237953 +0000 UTC m=+55.285870932" observedRunningTime="2025-05-10 00:01:24.263884476 +0000 UTC m=+55.905517587" watchObservedRunningTime="2025-05-10 00:01:24.267779088 +0000 UTC m=+55.909412091" May 10 00:01:24.770700 ntpd[2003]: Listen normally on 8 cali5ea99bc4666 [fe80::ecee:eeff:feee:eeee%7]:123 May 10 00:01:24.775026 ntpd[2003]: 10 May 00:01:24 ntpd[2003]: Listen normally on 8 cali5ea99bc4666 [fe80::ecee:eeff:feee:eeee%7]:123 May 10 00:01:24.775026 ntpd[2003]: 10 May 00:01:24 ntpd[2003]: Listen normally on 9 calid0be7c0a1ed [fe80::ecee:eeff:feee:eeee%8]:123 May 10 00:01:24.775026 ntpd[2003]: 10 May 00:01:24 ntpd[2003]: Listen normally on 10 calif7409448506 [fe80::ecee:eeff:feee:eeee%9]:123 May 10 00:01:24.775026 ntpd[2003]: 10 May 00:01:24 ntpd[2003]: Listen normally on 11 cali0fcfcfb9902 [fe80::ecee:eeff:feee:eeee%10]:123 May 10 00:01:24.775026 ntpd[2003]: 10 May 00:01:24 ntpd[2003]: Listen normally on 12 cali3998581dbd6 [fe80::ecee:eeff:feee:eeee%11]:123 May 10 00:01:24.775026 ntpd[2003]: 10 May 00:01:24 ntpd[2003]: Listen normally on 13 cali5dc9df6feb7 [fe80::ecee:eeff:feee:eeee%12]:123 May 10 00:01:24.771449 ntpd[2003]: Listen normally on 9 calid0be7c0a1ed [fe80::ecee:eeff:feee:eeee%8]:123 May 10 00:01:24.771522 ntpd[2003]: Listen normally on 10 calif7409448506 [fe80::ecee:eeff:feee:eeee%9]:123 May 10 00:01:24.771590 ntpd[2003]: Listen normally on 11 cali0fcfcfb9902 [fe80::ecee:eeff:feee:eeee%10]:123 May 10 00:01:24.771664 ntpd[2003]: Listen normally on 12 cali3998581dbd6 [fe80::ecee:eeff:feee:eeee%11]:123 May 10 00:01:24.771730 ntpd[2003]: Listen normally on 13 cali5dc9df6feb7 [fe80::ecee:eeff:feee:eeee%12]:123 May 10 00:01:25.234976 kubelet[3375]: I0510 00:01:25.232679 3375 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:01:26.767214 containerd[2047]: time="2025-05-10T00:01:26.766968712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:26.769433 containerd[2047]: time="2025-05-10T00:01:26.769222000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 10 00:01:26.771409 containerd[2047]: time="2025-05-10T00:01:26.771246412Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:26.779146 containerd[2047]: time="2025-05-10T00:01:26.778603324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:26.781426 containerd[2047]: time="2025-05-10T00:01:26.781366696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 
3.135699651s" May 10 00:01:26.781681 containerd[2047]: time="2025-05-10T00:01:26.781609276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 10 00:01:26.785584 containerd[2047]: time="2025-05-10T00:01:26.785179528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 10 00:01:26.818240 containerd[2047]: time="2025-05-10T00:01:26.818173469Z" level=info msg="CreateContainer within sandbox \"3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 10 00:01:26.875581 containerd[2047]: time="2025-05-10T00:01:26.875480969Z" level=info msg="CreateContainer within sandbox \"3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"529c0d3cc7c487c0bb9216de457a1485271e0085f503bc6bcb10bc71626ee56b\"" May 10 00:01:26.879315 containerd[2047]: time="2025-05-10T00:01:26.877810961Z" level=info msg="StartContainer for \"529c0d3cc7c487c0bb9216de457a1485271e0085f503bc6bcb10bc71626ee56b\"" May 10 00:01:27.031655 containerd[2047]: time="2025-05-10T00:01:27.031474154Z" level=info msg="StartContainer for \"529c0d3cc7c487c0bb9216de457a1485271e0085f503bc6bcb10bc71626ee56b\" returns successfully" May 10 00:01:27.276461 kubelet[3375]: I0510 00:01:27.275844 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d786f5787-wbk9p" podStartSLOduration=29.294415726 podStartE2EDuration="35.275820843s" podCreationTimestamp="2025-05-10 00:00:52 +0000 UTC" firstStartedPulling="2025-05-10 00:01:20.802650959 +0000 UTC m=+52.444283950" lastFinishedPulling="2025-05-10 00:01:26.784056088 +0000 UTC m=+58.425689067" observedRunningTime="2025-05-10 00:01:27.274582131 +0000 UTC m=+58.916215158" watchObservedRunningTime="2025-05-10 00:01:27.275820843 +0000 UTC m=+58.917453870" May 10 00:01:27.829863 systemd[1]: Started sshd@9-172.31.31.45:22-147.75.109.163:36228.service - OpenSSH per-connection server daemon (147.75.109.163:36228). May 10 00:01:28.025427 sshd[5952]: Accepted publickey for core from 147.75.109.163 port 36228 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:28.029519 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:28.040558 systemd-logind[2020]: New session 10 of user core. May 10 00:01:28.047730 systemd[1]: Started session-10.scope - Session 10 of User core. May 10 00:01:28.375900 sshd[5952]: pam_unix(sshd:session): session closed for user core May 10 00:01:28.399887 systemd[1]: sshd@9-172.31.31.45:22-147.75.109.163:36228.service: Deactivated successfully. May 10 00:01:28.410939 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:01:28.419650 systemd-logind[2020]: Session 10 logged out. Waiting for processes to exit. May 10 00:01:28.431893 systemd[1]: Started sshd@10-172.31.31.45:22-147.75.109.163:36240.service - OpenSSH per-connection server daemon (147.75.109.163:36240). May 10 00:01:28.439398 systemd-logind[2020]: Removed session 10. 
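The pod_startup_latency_tracker entries above encode a simple relationship: the E2E duration is the observed-running time minus the pod creation time, and the SLO duration additionally subtracts the image-pull window. That is why the coredns pods, whose pull timestamps are the zero-time sentinel 0001-01-01, report identical SLO and E2E values, while calico-kube-controllers shows 29.29s against 35.27s. Recomputing the kube-controllers figures with the stdlib, using the timestamps printed in the entry, lands within a few nanoseconds of kubelet's own value of 29.294415726:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the calico-kube-controllers-7d786f5787-wbk9p entry above.
	created := mustParse("2025-05-10 00:00:52 +0000 UTC")
	firstPull := mustParse("2025-05-10 00:01:20.802650959 +0000 UTC")
	lastPull := mustParse("2025-05-10 00:01:26.784056088 +0000 UTC")
	running := mustParse("2025-05-10 00:01:27.275820843 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window
	fmt.Println(e2e, slo)                // 35.275820843s 29.294415714s
}

The tiny residual against the logged value comes from kubelet computing with its own clock reads rather than the truncated timestamps it prints.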
May 10 00:01:28.508296 containerd[2047]: time="2025-05-10T00:01:28.508193369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:28.510285 containerd[2047]: time="2025-05-10T00:01:28.510197549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 10 00:01:28.512620 containerd[2047]: time="2025-05-10T00:01:28.512538929Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:28.517723 containerd[2047]: time="2025-05-10T00:01:28.517613837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:01:28.519563 containerd[2047]: time="2025-05-10T00:01:28.519092993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.733847525s" May 10 00:01:28.519563 containerd[2047]: time="2025-05-10T00:01:28.519158081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 10 00:01:28.527248 containerd[2047]: time="2025-05-10T00:01:28.527030453Z" level=info msg="CreateContainer within sandbox \"c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 10 00:01:28.570373 containerd[2047]: time="2025-05-10T00:01:28.567967469Z" level=info msg="CreateContainer within sandbox \"c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f126a3da49a84720fcb2d59177aa448bc6951cf2e92870386c58b9b96b1c846c\"" May 10 00:01:28.571401 containerd[2047]: time="2025-05-10T00:01:28.571333445Z" level=info msg="StartContainer for \"f126a3da49a84720fcb2d59177aa448bc6951cf2e92870386c58b9b96b1c846c\"" May 10 00:01:28.625589 containerd[2047]: time="2025-05-10T00:01:28.625409466Z" level=info msg="StopPodSandbox for \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\"" May 10 00:01:28.647469 sshd[5971]: Accepted publickey for core from 147.75.109.163 port 36240 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:28.660804 sshd[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:28.684252 systemd-logind[2020]: New session 11 of user core. May 10 00:01:28.692073 systemd[1]: Started session-11.scope - Session 11 of User core. 
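The fingerprint sshd keeps printing for the core user ("SHA256:yk6AfQWm...") is the unpadded base64 of a SHA-256 digest over the public key's SSH wire-format blob. A stdlib-only sketch that derives such a fingerprint for a freshly generated ed25519 key; ed25519 is chosen here only because its blob encoding is short, whereas the key in this log is RSA with a different blob layout, but the hashing and rendering scheme is the same:

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"encoding/binary"
	"fmt"
)

// sshString encodes b as an SSH wire-format string (uint32 length prefix + bytes).
func sshString(b []byte) []byte {
	out := make([]byte, 4+len(b))
	binary.BigEndian.PutUint32(out, uint32(len(b)))
	copy(out[4:], b)
	return out
}

func main() {
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	// Public-key blob: string "ssh-ed25519" followed by string key-bytes.
	blob := append(sshString([]byte("ssh-ed25519")), sshString(pub)...)
	sum := sha256.Sum256(blob)
	fmt.Println("SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]))
}

The same digest is what ssh-keygen -lf prints, which is how the repeated logins above can be matched to one client key at a glance.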
May 10 00:01:28.777693 containerd[2047]: time="2025-05-10T00:01:28.777502350Z" level=info msg="StartContainer for \"f126a3da49a84720fcb2d59177aa448bc6951cf2e92870386c58b9b96b1c846c\" returns successfully" May 10 00:01:28.862902 kubelet[3375]: I0510 00:01:28.862822 3375 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 10 00:01:28.862902 kubelet[3375]: I0510 00:01:28.862903 3375 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.760 [WARNING][6003] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0", GenerateName:"calico-apiserver-849c5f8b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"db502810-c7ca-4ff1-88e0-b13b0b451d63", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849c5f8b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56", Pod:"calico-apiserver-849c5f8b95-f5vqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0be7c0a1ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.761 [INFO][6003] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.761 [INFO][6003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" iface="eth0" netns="" May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.761 [INFO][6003] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.761 [INFO][6003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.907 [INFO][6027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" HandleID="k8s-pod-network.dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.907 [INFO][6027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.908 [INFO][6027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.931 [WARNING][6027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" HandleID="k8s-pod-network.dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.931 [INFO][6027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" HandleID="k8s-pod-network.dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.939 [INFO][6027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:28.948545 containerd[2047]: 2025-05-10 00:01:28.943 [INFO][6003] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:28.948545 containerd[2047]: time="2025-05-10T00:01:28.947889967Z" level=info msg="TearDown network for sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\" successfully" May 10 00:01:28.948545 containerd[2047]: time="2025-05-10T00:01:28.947942599Z" level=info msg="StopPodSandbox for \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\" returns successfully" May 10 00:01:28.950519 containerd[2047]: time="2025-05-10T00:01:28.949971295Z" level=info msg="RemovePodSandbox for \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\"" May 10 00:01:28.950519 containerd[2047]: time="2025-05-10T00:01:28.950041219Z" level=info msg="Forcibly stopping sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\"" May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.089 [WARNING][6059] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0", GenerateName:"calico-apiserver-849c5f8b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"db502810-c7ca-4ff1-88e0-b13b0b451d63", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849c5f8b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"f5ee8b97f4190eeec434222fad6ee0b2aa55bfb792d85ef7c4c25da752866a56", Pod:"calico-apiserver-849c5f8b95-f5vqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0be7c0a1ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.090 [INFO][6059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.090 [INFO][6059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" iface="eth0" netns="" May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.090 [INFO][6059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.091 [INFO][6059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.147 [INFO][6066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" HandleID="k8s-pod-network.dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.148 [INFO][6066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.148 [INFO][6066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.160 [WARNING][6066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" HandleID="k8s-pod-network.dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.160 [INFO][6066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" HandleID="k8s-pod-network.dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--f5vqw-eth0" May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.162 [INFO][6066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:29.168140 containerd[2047]: 2025-05-10 00:01:29.165 [INFO][6059] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a" May 10 00:01:29.171987 containerd[2047]: time="2025-05-10T00:01:29.170035216Z" level=info msg="TearDown network for sandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\" successfully" May 10 00:01:29.183854 containerd[2047]: time="2025-05-10T00:01:29.183505528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:01:29.183854 containerd[2047]: time="2025-05-10T00:01:29.183622816Z" level=info msg="RemovePodSandbox \"dd175450dde2e7ce5f8611ec7e8cc1dc01f577b6311d1672e5fa14e5cb64de4a\" returns successfully" May 10 00:01:29.185920 containerd[2047]: time="2025-05-10T00:01:29.185422444Z" level=info msg="StopPodSandbox for \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\"" May 10 00:01:29.200088 sshd[5971]: pam_unix(sshd:session): session closed for user core May 10 00:01:29.228583 systemd-logind[2020]: Session 11 logged out. Waiting for processes to exit. May 10 00:01:29.229478 systemd[1]: sshd@10-172.31.31.45:22-147.75.109.163:36240.service: Deactivated successfully. May 10 00:01:29.247404 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:01:29.273850 systemd[1]: Started sshd@11-172.31.31.45:22-147.75.109.163:36256.service - OpenSSH per-connection server daemon (147.75.109.163:36256). May 10 00:01:29.280729 systemd-logind[2020]: Removed session 11. May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.390 [WARNING][6083] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0", GenerateName:"calico-apiserver-849c5f8b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"32fc493b-ac7b-4ffe-adf4-cc372f4e4150", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849c5f8b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865", Pod:"calico-apiserver-849c5f8b95-l4kql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fcfcfb9902", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.391 [INFO][6083] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.393 [INFO][6083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" iface="eth0" netns="" May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.393 [INFO][6083] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.393 [INFO][6083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.432 [INFO][6096] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" HandleID="k8s-pod-network.241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.432 [INFO][6096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.432 [INFO][6096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.445 [WARNING][6096] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" HandleID="k8s-pod-network.241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.445 [INFO][6096] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" HandleID="k8s-pod-network.241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.447 [INFO][6096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:29.454784 containerd[2047]: 2025-05-10 00:01:29.451 [INFO][6083] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:29.456472 containerd[2047]: time="2025-05-10T00:01:29.454835970Z" level=info msg="TearDown network for sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\" successfully" May 10 00:01:29.456472 containerd[2047]: time="2025-05-10T00:01:29.454876602Z" level=info msg="StopPodSandbox for \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\" returns successfully" May 10 00:01:29.456472 containerd[2047]: time="2025-05-10T00:01:29.455526738Z" level=info msg="RemovePodSandbox for \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\"" May 10 00:01:29.456472 containerd[2047]: time="2025-05-10T00:01:29.455576166Z" level=info msg="Forcibly stopping sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\"" May 10 00:01:29.498221 sshd[6090]: Accepted publickey for core from 147.75.109.163 port 36256 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:29.502102 sshd[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:29.521383 systemd-logind[2020]: New session 12 of user core. May 10 00:01:29.529401 systemd[1]: Started session-12.scope - Session 12 of User core. May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.545 [WARNING][6114] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0", GenerateName:"calico-apiserver-849c5f8b95-", Namespace:"calico-apiserver", SelfLink:"", UID:"32fc493b-ac7b-4ffe-adf4-cc372f4e4150", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849c5f8b95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"321126037e7ae6e2585c2fab4aec2b062ab6de5ea7ccb5779c8fa9a87210c865", Pod:"calico-apiserver-849c5f8b95-l4kql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0fcfcfb9902", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.546 [INFO][6114] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.546 [INFO][6114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" iface="eth0" netns="" May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.546 [INFO][6114] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.546 [INFO][6114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.713 [INFO][6122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" HandleID="k8s-pod-network.241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.714 [INFO][6122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.714 [INFO][6122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.746 [WARNING][6122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" HandleID="k8s-pod-network.241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.746 [INFO][6122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" HandleID="k8s-pod-network.241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" Workload="ip--172--31--31--45-k8s-calico--apiserver--849c5f8b95--l4kql-eth0" May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.754 [INFO][6122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:29.783416 containerd[2047]: 2025-05-10 00:01:29.767 [INFO][6114] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8" May 10 00:01:29.786774 containerd[2047]: time="2025-05-10T00:01:29.783848575Z" level=info msg="TearDown network for sandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\" successfully" May 10 00:01:29.815771 containerd[2047]: time="2025-05-10T00:01:29.811728019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:01:29.815771 containerd[2047]: time="2025-05-10T00:01:29.811850983Z" level=info msg="RemovePodSandbox \"241cca6cd4d566a71f49e6755dd31febe66d4c65c378403c96e6df69b9bdd3d8\" returns successfully" May 10 00:01:29.815771 containerd[2047]: time="2025-05-10T00:01:29.813126763Z" level=info msg="StopPodSandbox for \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\"" May 10 00:01:29.961928 sshd[6090]: pam_unix(sshd:session): session closed for user core May 10 00:01:29.975179 systemd[1]: sshd@11-172.31.31.45:22-147.75.109.163:36256.service: Deactivated successfully. May 10 00:01:29.989994 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:01:29.992738 systemd-logind[2020]: Session 12 logged out. Waiting for processes to exit. May 10 00:01:29.995532 systemd-logind[2020]: Removed session 12. May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:29.956 [WARNING][6149] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0", GenerateName:"calico-kube-controllers-7d786f5787-", Namespace:"calico-system", SelfLink:"", UID:"91e59d68-96e3-41cb-8fed-74ab2b45acc7", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d786f5787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f", Pod:"calico-kube-controllers-7d786f5787-wbk9p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5dc9df6feb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:29.957 [INFO][6149] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:29.957 [INFO][6149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" iface="eth0" netns="" May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:29.957 [INFO][6149] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:29.957 [INFO][6149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:30.028 [INFO][6157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" HandleID="k8s-pod-network.52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:30.033 [INFO][6157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:30.033 [INFO][6157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:30.055 [WARNING][6157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" HandleID="k8s-pod-network.52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:30.055 [INFO][6157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" HandleID="k8s-pod-network.52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:30.058 [INFO][6157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:30.066178 containerd[2047]: 2025-05-10 00:01:30.062 [INFO][6149] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:30.066178 containerd[2047]: time="2025-05-10T00:01:30.066033461Z" level=info msg="TearDown network for sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\" successfully" May 10 00:01:30.066178 containerd[2047]: time="2025-05-10T00:01:30.066070337Z" level=info msg="StopPodSandbox for \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\" returns successfully" May 10 00:01:30.069406 containerd[2047]: time="2025-05-10T00:01:30.069233069Z" level=info msg="RemovePodSandbox for \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\"" May 10 00:01:30.069406 containerd[2047]: time="2025-05-10T00:01:30.069358037Z" level=info msg="Forcibly stopping sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\"" May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.177 [WARNING][6178] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0", GenerateName:"calico-kube-controllers-7d786f5787-", Namespace:"calico-system", SelfLink:"", UID:"91e59d68-96e3-41cb-8fed-74ab2b45acc7", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d786f5787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"3f180033e342ac4828076d10d211d6888b20b17cbc5ac6d20023e4831e76a66f", Pod:"calico-kube-controllers-7d786f5787-wbk9p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5dc9df6feb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.177 [INFO][6178] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.177 [INFO][6178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" iface="eth0" netns="" May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.177 [INFO][6178] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.177 [INFO][6178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.222 [INFO][6186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" HandleID="k8s-pod-network.52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.222 [INFO][6186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.222 [INFO][6186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.235 [WARNING][6186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" HandleID="k8s-pod-network.52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.235 [INFO][6186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" HandleID="k8s-pod-network.52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--7d786f5787--wbk9p-eth0" May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.237 [INFO][6186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:30.243437 containerd[2047]: 2025-05-10 00:01:30.240 [INFO][6178] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a" May 10 00:01:30.244903 containerd[2047]: time="2025-05-10T00:01:30.244112226Z" level=info msg="TearDown network for sandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\" successfully" May 10 00:01:30.252066 containerd[2047]: time="2025-05-10T00:01:30.251955630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:01:30.252066 containerd[2047]: time="2025-05-10T00:01:30.252053802Z" level=info msg="RemovePodSandbox \"52dd5835a8718c533d18ccb9599400e7a9f6f8877c729481085d17264662132a\" returns successfully" May 10 00:01:30.252734 containerd[2047]: time="2025-05-10T00:01:30.252683514Z" level=info msg="StopPodSandbox for \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\"" May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.320 [WARNING][6204] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c2c3604-3d54-40f4-b6c5-046056f34637", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea", Pod:"csi-node-driver-j62d4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7409448506", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.320 [INFO][6204] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.320 [INFO][6204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" iface="eth0" netns="" May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.320 [INFO][6204] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.320 [INFO][6204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.360 [INFO][6211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" HandleID="k8s-pod-network.d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.360 [INFO][6211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.361 [INFO][6211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.374 [WARNING][6211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" HandleID="k8s-pod-network.d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.375 [INFO][6211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" HandleID="k8s-pod-network.d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.377 [INFO][6211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:30.383148 containerd[2047]: 2025-05-10 00:01:30.380 [INFO][6204] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:30.384020 containerd[2047]: time="2025-05-10T00:01:30.383146650Z" level=info msg="TearDown network for sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\" successfully" May 10 00:01:30.384020 containerd[2047]: time="2025-05-10T00:01:30.383209350Z" level=info msg="StopPodSandbox for \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\" returns successfully" May 10 00:01:30.385394 containerd[2047]: time="2025-05-10T00:01:30.384403758Z" level=info msg="RemovePodSandbox for \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\"" May 10 00:01:30.385394 containerd[2047]: time="2025-05-10T00:01:30.384455922Z" level=info msg="Forcibly stopping sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\"" May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.454 [WARNING][6229] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c2c3604-3d54-40f4-b6c5-046056f34637", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"c53ad00ec6eebdbebd9d1d83ec8908999e650004d1bb6dccbf6b73c45b4af8ea", Pod:"csi-node-driver-j62d4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7409448506", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.454 [INFO][6229] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.454 [INFO][6229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" iface="eth0" netns="" May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.455 [INFO][6229] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.455 [INFO][6229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.494 [INFO][6236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" HandleID="k8s-pod-network.d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.494 [INFO][6236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.494 [INFO][6236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.508 [WARNING][6236] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" HandleID="k8s-pod-network.d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.509 [INFO][6236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" HandleID="k8s-pod-network.d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" Workload="ip--172--31--31--45-k8s-csi--node--driver--j62d4-eth0" May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.511 [INFO][6236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:30.517048 containerd[2047]: 2025-05-10 00:01:30.514 [INFO][6229] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71" May 10 00:01:30.517851 containerd[2047]: time="2025-05-10T00:01:30.517103587Z" level=info msg="TearDown network for sandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\" successfully" May 10 00:01:30.523723 containerd[2047]: time="2025-05-10T00:01:30.523646143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:01:30.523917 containerd[2047]: time="2025-05-10T00:01:30.523756147Z" level=info msg="RemovePodSandbox \"d7958b2488c60caff49ca8bd40023dc041861743b64c2daf18d55c0109c6cc71\" returns successfully" May 10 00:01:30.524485 containerd[2047]: time="2025-05-10T00:01:30.524440015Z" level=info msg="StopPodSandbox for \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\"" May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.594 [WARNING][6254] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769", Pod:"coredns-7db6d8ff4d-lgml6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ea99bc4666", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.595 [INFO][6254] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.595 [INFO][6254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" iface="eth0" netns="" May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.595 [INFO][6254] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.595 [INFO][6254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.635 [INFO][6261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" HandleID="k8s-pod-network.24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.635 [INFO][6261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.635 [INFO][6261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.647 [WARNING][6261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" HandleID="k8s-pod-network.24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.647 [INFO][6261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" HandleID="k8s-pod-network.24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.650 [INFO][6261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:30.657426 containerd[2047]: 2025-05-10 00:01:30.653 [INFO][6254] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:30.659568 containerd[2047]: time="2025-05-10T00:01:30.658016048Z" level=info msg="TearDown network for sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\" successfully" May 10 00:01:30.659568 containerd[2047]: time="2025-05-10T00:01:30.658061672Z" level=info msg="StopPodSandbox for \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\" returns successfully" May 10 00:01:30.659568 containerd[2047]: time="2025-05-10T00:01:30.659009468Z" level=info msg="RemovePodSandbox for \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\"" May 10 00:01:30.659568 containerd[2047]: time="2025-05-10T00:01:30.659058896Z" level=info msg="Forcibly stopping sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\"" May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.725 [WARNING][6279] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"674b9f1a-f6be-4bbd-9862-23fa9a9e4bf6", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"a060d32c15e17727a45b572b03210a3f0286dd96ba39e3a8c681ce6d47511769", Pod:"coredns-7db6d8ff4d-lgml6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ea99bc4666", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.726 [INFO][6279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.726 [INFO][6279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" iface="eth0" netns="" May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.726 [INFO][6279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.726 [INFO][6279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.774 [INFO][6286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" HandleID="k8s-pod-network.24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.774 [INFO][6286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.775 [INFO][6286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.787 [WARNING][6286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" HandleID="k8s-pod-network.24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.787 [INFO][6286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" HandleID="k8s-pod-network.24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--lgml6-eth0" May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.789 [INFO][6286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:30.795036 containerd[2047]: 2025-05-10 00:01:30.792 [INFO][6279] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41" May 10 00:01:30.796313 containerd[2047]: time="2025-05-10T00:01:30.795121700Z" level=info msg="TearDown network for sandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\" successfully" May 10 00:01:30.801426 containerd[2047]: time="2025-05-10T00:01:30.801359540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:01:30.801662 containerd[2047]: time="2025-05-10T00:01:30.801457844Z" level=info msg="RemovePodSandbox \"24d79aedf86b187b85930f67c26e133411a0ce7799b70f1cfc0c259da6b27b41\" returns successfully" May 10 00:01:30.802640 containerd[2047]: time="2025-05-10T00:01:30.802329272Z" level=info msg="StopPodSandbox for \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\"" May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.871 [WARNING][6306] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb912ddc-f907-4e24-afb8-4c04709bf4e6", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea", Pod:"coredns-7db6d8ff4d-jxvbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3998581dbd6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.871 [INFO][6306] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.871 [INFO][6306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" iface="eth0" netns="" May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.871 [INFO][6306] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.871 [INFO][6306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.921 [INFO][6313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" HandleID="k8s-pod-network.d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.921 [INFO][6313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.921 [INFO][6313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.933 [WARNING][6313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" HandleID="k8s-pod-network.d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.933 [INFO][6313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" HandleID="k8s-pod-network.d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.936 [INFO][6313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:30.945111 containerd[2047]: 2025-05-10 00:01:30.938 [INFO][6306] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:30.945111 containerd[2047]: time="2025-05-10T00:01:30.945065625Z" level=info msg="TearDown network for sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\" successfully" May 10 00:01:30.947544 containerd[2047]: time="2025-05-10T00:01:30.945116661Z" level=info msg="StopPodSandbox for \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\" returns successfully" May 10 00:01:30.947544 containerd[2047]: time="2025-05-10T00:01:30.947371005Z" level=info msg="RemovePodSandbox for \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\"" May 10 00:01:30.947544 containerd[2047]: time="2025-05-10T00:01:30.947445777Z" level=info msg="Forcibly stopping sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\"" May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.020 [WARNING][6331] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb912ddc-f907-4e24-afb8-4c04709bf4e6", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"aa1949a64a9e1666248f73ce330940ed222e40d943ce19b0c3a7e1e099ff1aea", Pod:"coredns-7db6d8ff4d-jxvbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3998581dbd6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.021 [INFO][6331] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.021 [INFO][6331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" iface="eth0" netns="" May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.021 [INFO][6331] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.021 [INFO][6331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.059 [INFO][6338] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" HandleID="k8s-pod-network.d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.060 [INFO][6338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.060 [INFO][6338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.075 [WARNING][6338] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" HandleID="k8s-pod-network.d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.075 [INFO][6338] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" HandleID="k8s-pod-network.d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" Workload="ip--172--31--31--45-k8s-coredns--7db6d8ff4d--jxvbb-eth0" May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.077 [INFO][6338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:01:31.084035 containerd[2047]: 2025-05-10 00:01:31.080 [INFO][6331] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321" May 10 00:01:31.084923 containerd[2047]: time="2025-05-10T00:01:31.084119982Z" level=info msg="TearDown network for sandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\" successfully" May 10 00:01:31.091590 containerd[2047]: time="2025-05-10T00:01:31.091517874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:01:31.091719 containerd[2047]: time="2025-05-10T00:01:31.091626618Z" level=info msg="RemovePodSandbox \"d28e5c87ca84d3c413fe84367de325bb1a281e4d5081b72c9041d02a75c6e321\" returns successfully" May 10 00:01:34.991880 systemd[1]: Started sshd@12-172.31.31.45:22-147.75.109.163:36270.service - OpenSSH per-connection server daemon (147.75.109.163:36270). May 10 00:01:35.200904 sshd[6368]: Accepted publickey for core from 147.75.109.163 port 36270 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:35.205387 sshd[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:35.219329 systemd-logind[2020]: New session 13 of user core. May 10 00:01:35.223917 systemd[1]: Started session-13.scope - Session 13 of User core. May 10 00:01:35.500007 sshd[6368]: pam_unix(sshd:session): session closed for user core May 10 00:01:35.506656 systemd[1]: sshd@12-172.31.31.45:22-147.75.109.163:36270.service: Deactivated successfully. May 10 00:01:35.514480 systemd-logind[2020]: Session 13 logged out. Waiting for processes to exit. May 10 00:01:35.515500 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:01:35.519375 systemd-logind[2020]: Removed session 13. 
May 10 00:01:38.190200 kubelet[3375]: I0510 00:01:38.189712 3375 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:01:38.226140 kubelet[3375]: I0510 00:01:38.223732 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j62d4" podStartSLOduration=36.998992603 podStartE2EDuration="46.223711189s" podCreationTimestamp="2025-05-10 00:00:52 +0000 UTC" firstStartedPulling="2025-05-10 00:01:19.296149579 +0000 UTC m=+50.937782558" lastFinishedPulling="2025-05-10 00:01:28.520868165 +0000 UTC m=+60.162501144" observedRunningTime="2025-05-10 00:01:29.350238521 +0000 UTC m=+60.991871608" watchObservedRunningTime="2025-05-10 00:01:38.223711189 +0000 UTC m=+69.865344216" May 10 00:01:40.535417 systemd[1]: Started sshd@13-172.31.31.45:22-147.75.109.163:37796.service - OpenSSH per-connection server daemon (147.75.109.163:37796). May 10 00:01:40.727589 sshd[6391]: Accepted publickey for core from 147.75.109.163 port 37796 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:40.731549 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:40.739444 systemd-logind[2020]: New session 14 of user core. May 10 00:01:40.747877 systemd[1]: Started session-14.scope - Session 14 of User core. May 10 00:01:41.021542 sshd[6391]: pam_unix(sshd:session): session closed for user core May 10 00:01:41.038598 systemd[1]: sshd@13-172.31.31.45:22-147.75.109.163:37796.service: Deactivated successfully. May 10 00:01:41.045686 systemd[1]: session-14.scope: Deactivated successfully. May 10 00:01:41.047576 systemd-logind[2020]: Session 14 logged out. Waiting for processes to exit. May 10 00:01:41.050043 systemd-logind[2020]: Removed session 14. May 10 00:01:46.056950 systemd[1]: Started sshd@14-172.31.31.45:22-147.75.109.163:37802.service - OpenSSH per-connection server daemon (147.75.109.163:37802). May 10 00:01:46.232390 sshd[6410]: Accepted publickey for core from 147.75.109.163 port 37802 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:46.235183 sshd[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:46.244335 systemd-logind[2020]: New session 15 of user core. May 10 00:01:46.251174 systemd[1]: Started session-15.scope - Session 15 of User core. May 10 00:01:46.518503 sshd[6410]: pam_unix(sshd:session): session closed for user core May 10 00:01:46.525656 systemd[1]: sshd@14-172.31.31.45:22-147.75.109.163:37802.service: Deactivated successfully. May 10 00:01:46.530692 systemd[1]: session-15.scope: Deactivated successfully. May 10 00:01:46.532234 systemd-logind[2020]: Session 15 logged out. Waiting for processes to exit. May 10 00:01:46.535898 systemd-logind[2020]: Removed session 15. May 10 00:01:51.549748 systemd[1]: Started sshd@15-172.31.31.45:22-147.75.109.163:47888.service - OpenSSH per-connection server daemon (147.75.109.163:47888). May 10 00:01:51.724455 sshd[6447]: Accepted publickey for core from 147.75.109.163 port 47888 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:01:51.727234 sshd[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:01:51.736451 systemd-logind[2020]: New session 16 of user core. May 10 00:01:51.741771 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 10 00:01:52.062544 sshd[6447]: pam_unix(sshd:session): session closed for user core
May 10 00:01:52.069246 systemd[1]: sshd@15-172.31.31.45:22-147.75.109.163:47888.service: Deactivated successfully.
May 10 00:01:52.078979 systemd-logind[2020]: Session 16 logged out. Waiting for processes to exit.
May 10 00:01:52.080822 systemd[1]: session-16.scope: Deactivated successfully.
May 10 00:01:52.097818 systemd[1]: Started sshd@16-172.31.31.45:22-147.75.109.163:47904.service - OpenSSH per-connection server daemon (147.75.109.163:47904).
May 10 00:01:52.099508 systemd-logind[2020]: Removed session 16.
May 10 00:01:52.289154 sshd[6461]: Accepted publickey for core from 147.75.109.163 port 47904 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:01:52.292205 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:52.301156 systemd-logind[2020]: New session 17 of user core.
May 10 00:01:52.307899 systemd[1]: Started session-17.scope - Session 17 of User core.
May 10 00:01:52.865843 sshd[6461]: pam_unix(sshd:session): session closed for user core
May 10 00:01:52.874118 systemd-logind[2020]: Session 17 logged out. Waiting for processes to exit.
May 10 00:01:52.877474 systemd[1]: sshd@16-172.31.31.45:22-147.75.109.163:47904.service: Deactivated successfully.
May 10 00:01:52.895930 systemd[1]: session-17.scope: Deactivated successfully.
May 10 00:01:52.902704 systemd-logind[2020]: Removed session 17.
May 10 00:01:52.908942 systemd[1]: Started sshd@17-172.31.31.45:22-147.75.109.163:47916.service - OpenSSH per-connection server daemon (147.75.109.163:47916).
May 10 00:01:53.112735 sshd[6473]: Accepted publickey for core from 147.75.109.163 port 47916 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:01:53.114768 sshd[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:53.123891 systemd-logind[2020]: New session 18 of user core.
May 10 00:01:53.129931 systemd[1]: Started session-18.scope - Session 18 of User core.
May 10 00:01:56.850208 sshd[6473]: pam_unix(sshd:session): session closed for user core
May 10 00:01:56.862587 systemd[1]: sshd@17-172.31.31.45:22-147.75.109.163:47916.service: Deactivated successfully.
May 10 00:01:56.894345 systemd-logind[2020]: Session 18 logged out. Waiting for processes to exit.
May 10 00:01:56.907189 systemd[1]: session-18.scope: Deactivated successfully.
May 10 00:01:56.933963 systemd[1]: Started sshd@18-172.31.31.45:22-147.75.109.163:57080.service - OpenSSH per-connection server daemon (147.75.109.163:57080).
May 10 00:01:56.942112 systemd-logind[2020]: Removed session 18.
May 10 00:01:57.139047 sshd[6496]: Accepted publickey for core from 147.75.109.163 port 57080 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:01:57.142197 sshd[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:57.150405 systemd-logind[2020]: New session 19 of user core.
May 10 00:01:57.157892 systemd[1]: Started session-19.scope - Session 19 of User core.
May 10 00:01:57.741451 sshd[6496]: pam_unix(sshd:session): session closed for user core
May 10 00:01:57.752745 systemd[1]: sshd@18-172.31.31.45:22-147.75.109.163:57080.service: Deactivated successfully.
May 10 00:01:57.759072 systemd[1]: session-19.scope: Deactivated successfully.
May 10 00:01:57.760009 systemd-logind[2020]: Session 19 logged out. Waiting for processes to exit.
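Unit names like sshd@16-172.31.31.45:22-147.75.109.163:47904.service are systemd per-connection instances: Flatcar runs sshd socket-activated (Accept=yes on sshd.socket), so each accepted TCP connection spawns its own sshd@.service instance named with a connection counter plus the local and remote endpoints. A small Go sketch that splits a unit name back into those fields (the format is read off the log lines above, not a documented API):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // unitRe matches names like "sshd@16-172.31.31.45:22-147.75.109.163:47904.service":
    // an instance counter, then local addr:port, then remote addr:port.
    var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)

    func main() {
    	unit := "sshd@16-172.31.31.45:22-147.75.109.163:47904.service"
    	m := unitRe.FindStringSubmatch(unit)
    	if m == nil {
    		fmt.Println("not a per-connection sshd unit")
    		return
    	}
    	fmt.Printf("instance=%s local=%s remote=%s\n", m[1], m[2], m[3])
    }

The counter (12 through 26 across this stretch of log) increments once per accepted connection on the socket, which is why it tracks the session numbers so closely here.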
May 10 00:01:57.775800 systemd[1]: Started sshd@19-172.31.31.45:22-147.75.109.163:57092.service - OpenSSH per-connection server daemon (147.75.109.163:57092).
May 10 00:01:57.777915 systemd-logind[2020]: Removed session 19.
May 10 00:01:57.974773 sshd[6510]: Accepted publickey for core from 147.75.109.163 port 57092 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:01:57.979586 sshd[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:01:58.002460 systemd-logind[2020]: New session 20 of user core.
May 10 00:01:58.004909 systemd[1]: Started session-20.scope - Session 20 of User core.
May 10 00:01:58.281536 sshd[6510]: pam_unix(sshd:session): session closed for user core
May 10 00:01:58.299237 systemd[1]: sshd@19-172.31.31.45:22-147.75.109.163:57092.service: Deactivated successfully.
May 10 00:01:58.310189 systemd[1]: session-20.scope: Deactivated successfully.
May 10 00:01:58.313938 systemd-logind[2020]: Session 20 logged out. Waiting for processes to exit.
May 10 00:01:58.316557 systemd-logind[2020]: Removed session 20.
May 10 00:02:03.317601 systemd[1]: Started sshd@20-172.31.31.45:22-147.75.109.163:57104.service - OpenSSH per-connection server daemon (147.75.109.163:57104).
May 10 00:02:03.521683 sshd[6545]: Accepted publickey for core from 147.75.109.163 port 57104 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:03.524461 sshd[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:03.534398 systemd-logind[2020]: New session 21 of user core.
May 10 00:02:03.541806 systemd[1]: Started session-21.scope - Session 21 of User core.
May 10 00:02:03.807988 sshd[6545]: pam_unix(sshd:session): session closed for user core
May 10 00:02:03.819223 systemd[1]: sshd@20-172.31.31.45:22-147.75.109.163:57104.service: Deactivated successfully.
May 10 00:02:03.824835 systemd-logind[2020]: Session 21 logged out. Waiting for processes to exit.
May 10 00:02:03.825333 systemd[1]: session-21.scope: Deactivated successfully.
May 10 00:02:03.830973 systemd-logind[2020]: Removed session 21.
May 10 00:02:08.835807 systemd[1]: Started sshd@21-172.31.31.45:22-147.75.109.163:46684.service - OpenSSH per-connection server daemon (147.75.109.163:46684).
May 10 00:02:09.020834 sshd[6562]: Accepted publickey for core from 147.75.109.163 port 46684 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:09.023548 sshd[6562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:09.031412 systemd-logind[2020]: New session 22 of user core.
May 10 00:02:09.041937 systemd[1]: Started session-22.scope - Session 22 of User core.
May 10 00:02:09.297885 sshd[6562]: pam_unix(sshd:session): session closed for user core
May 10 00:02:09.305196 systemd[1]: sshd@21-172.31.31.45:22-147.75.109.163:46684.service: Deactivated successfully.
May 10 00:02:09.313663 systemd[1]: session-22.scope: Deactivated successfully.
May 10 00:02:09.315133 systemd-logind[2020]: Session 22 logged out. Waiting for processes to exit.
May 10 00:02:09.317142 systemd-logind[2020]: Removed session 22.
May 10 00:02:14.328715 systemd[1]: Started sshd@22-172.31.31.45:22-147.75.109.163:46700.service - OpenSSH per-connection server daemon (147.75.109.163:46700).
May 10 00:02:14.502722 sshd[6577]: Accepted publickey for core from 147.75.109.163 port 46700 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:14.505436 sshd[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:14.513801 systemd-logind[2020]: New session 23 of user core.
May 10 00:02:14.519917 systemd[1]: Started session-23.scope - Session 23 of User core.
May 10 00:02:14.768597 sshd[6577]: pam_unix(sshd:session): session closed for user core
May 10 00:02:14.775090 systemd[1]: sshd@22-172.31.31.45:22-147.75.109.163:46700.service: Deactivated successfully.
May 10 00:02:14.784246 systemd[1]: session-23.scope: Deactivated successfully.
May 10 00:02:14.786454 systemd-logind[2020]: Session 23 logged out. Waiting for processes to exit.
May 10 00:02:14.788686 systemd-logind[2020]: Removed session 23.
May 10 00:02:19.799768 systemd[1]: Started sshd@23-172.31.31.45:22-147.75.109.163:43310.service - OpenSSH per-connection server daemon (147.75.109.163:43310).
May 10 00:02:19.976092 sshd[6593]: Accepted publickey for core from 147.75.109.163 port 43310 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:19.982084 sshd[6593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:19.994279 systemd-logind[2020]: New session 24 of user core.
May 10 00:02:20.003036 systemd[1]: Started session-24.scope - Session 24 of User core.
May 10 00:02:20.288918 sshd[6593]: pam_unix(sshd:session): session closed for user core
May 10 00:02:20.297717 systemd[1]: sshd@23-172.31.31.45:22-147.75.109.163:43310.service: Deactivated successfully.
May 10 00:02:20.303944 systemd-logind[2020]: Session 24 logged out. Waiting for processes to exit.
May 10 00:02:20.304040 systemd[1]: session-24.scope: Deactivated successfully.
May 10 00:02:20.307756 systemd-logind[2020]: Removed session 24.
May 10 00:02:25.320758 systemd[1]: Started sshd@24-172.31.31.45:22-147.75.109.163:43316.service - OpenSSH per-connection server daemon (147.75.109.163:43316).
May 10 00:02:25.510776 sshd[6629]: Accepted publickey for core from 147.75.109.163 port 43316 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:25.513730 sshd[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:25.521855 systemd-logind[2020]: New session 25 of user core.
May 10 00:02:25.528973 systemd[1]: Started session-25.scope - Session 25 of User core.
May 10 00:02:25.784335 sshd[6629]: pam_unix(sshd:session): session closed for user core
May 10 00:02:25.791622 systemd[1]: sshd@24-172.31.31.45:22-147.75.109.163:43316.service: Deactivated successfully.
May 10 00:02:25.799333 systemd-logind[2020]: Session 25 logged out. Waiting for processes to exit.
May 10 00:02:25.799603 systemd[1]: session-25.scope: Deactivated successfully.
May 10 00:02:25.802656 systemd-logind[2020]: Removed session 25.
May 10 00:02:30.813929 systemd[1]: Started sshd@25-172.31.31.45:22-147.75.109.163:52968.service - OpenSSH per-connection server daemon (147.75.109.163:52968).
May 10 00:02:30.985394 sshd[6663]: Accepted publickey for core from 147.75.109.163 port 52968 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:02:30.988088 sshd[6663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:02:30.996475 systemd-logind[2020]: New session 26 of user core.
May 10 00:02:31.002980 systemd[1]: Started session-26.scope - Session 26 of User core.
May 10 00:02:31.244396 sshd[6663]: pam_unix(sshd:session): session closed for user core
May 10 00:02:31.250451 systemd[1]: sshd@25-172.31.31.45:22-147.75.109.163:52968.service: Deactivated successfully.
May 10 00:02:31.258824 systemd-logind[2020]: Session 26 logged out. Waiting for processes to exit.
May 10 00:02:31.259714 systemd[1]: session-26.scope: Deactivated successfully.
May 10 00:02:31.263174 systemd-logind[2020]: Removed session 26.
May 10 00:02:44.458344 containerd[2047]: time="2025-05-10T00:02:44.458147334Z" level=info msg="shim disconnected" id=5c5292370dd5759ebd601675ba97d590962cd60633bc1566033dc82759dd97e5 namespace=k8s.io
May 10 00:02:44.458344 containerd[2047]: time="2025-05-10T00:02:44.458251770Z" level=warning msg="cleaning up after shim disconnected" id=5c5292370dd5759ebd601675ba97d590962cd60633bc1566033dc82759dd97e5 namespace=k8s.io
May 10 00:02:44.459149 containerd[2047]: time="2025-05-10T00:02:44.458355198Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:02:44.465572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c5292370dd5759ebd601675ba97d590962cd60633bc1566033dc82759dd97e5-rootfs.mount: Deactivated successfully.
May 10 00:02:44.566831 kubelet[3375]: I0510 00:02:44.566752 3375 scope.go:117] "RemoveContainer" containerID="5c5292370dd5759ebd601675ba97d590962cd60633bc1566033dc82759dd97e5"
May 10 00:02:44.572537 containerd[2047]: time="2025-05-10T00:02:44.572220991Z" level=info msg="CreateContainer within sandbox \"5945d599b9ad4376008fbb7704f6af4c02d5f1b75d800943c8ec53b02ad8b7ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 10 00:02:44.598486 containerd[2047]: time="2025-05-10T00:02:44.598405879Z" level=info msg="CreateContainer within sandbox \"5945d599b9ad4376008fbb7704f6af4c02d5f1b75d800943c8ec53b02ad8b7ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2a0f203386000024df81a18298dc7cfc764aa7c0d01201119a019feea7d7d7bd\""
May 10 00:02:44.601642 containerd[2047]: time="2025-05-10T00:02:44.599229871Z" level=info msg="StartContainer for \"2a0f203386000024df81a18298dc7cfc764aa7c0d01201119a019feea7d7d7bd\""
May 10 00:02:44.726717 containerd[2047]: time="2025-05-10T00:02:44.726643424Z" level=info msg="StartContainer for \"2a0f203386000024df81a18298dc7cfc764aa7c0d01201119a019feea7d7d7bd\" returns successfully"
May 10 00:02:44.861511 containerd[2047]: time="2025-05-10T00:02:44.861390440Z" level=info msg="shim disconnected" id=477a43a7e6e6a72f4b4a3ffd7d9372caf3af1ab829bcb6aa70d2a1685043d472 namespace=k8s.io
May 10 00:02:44.862339 containerd[2047]: time="2025-05-10T00:02:44.861505184Z" level=warning msg="cleaning up after shim disconnected" id=477a43a7e6e6a72f4b4a3ffd7d9372caf3af1ab829bcb6aa70d2a1685043d472 namespace=k8s.io
May 10 00:02:44.862339 containerd[2047]: time="2025-05-10T00:02:44.861552896Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:02:45.461521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-477a43a7e6e6a72f4b4a3ffd7d9372caf3af1ab829bcb6aa70d2a1685043d472-rootfs.mount: Deactivated successfully.
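This stretch shows kubelet restarting a control-plane container in place: the old kube-controller-manager task's shim exits ("shim disconnected"), kubelet removes the dead container record ("RemoveContainer") and creates a replacement inside the same pod sandbox with the attempt counter bumped (Attempt:1). Those attempt counts are visible through the same CRI API kubelet uses; a sketch with the cri-api Go client, assuming containerd's default socket path on this host:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Default containerd CRI endpoint; adjust if relocated on your host.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		// A restarted container reports Attempt > 0, like
    		// kube-controller-manager in the log above.
    		fmt.Printf("%s attempt=%d state=%s\n",
    			c.Metadata.Name, c.Metadata.Attempt, c.State)
    	}
    }

The same counters appear in the ATTEMPT column of crictl ps -a from the command line.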
May 10 00:02:45.583418 kubelet[3375]: I0510 00:02:45.583173 3375 scope.go:117] "RemoveContainer" containerID="477a43a7e6e6a72f4b4a3ffd7d9372caf3af1ab829bcb6aa70d2a1685043d472"
May 10 00:02:45.589112 containerd[2047]: time="2025-05-10T00:02:45.588682724Z" level=info msg="CreateContainer within sandbox \"27249a29e0e0708132bd99e3685e42b6270be6cac558cde69490ce7200b5b164\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 10 00:02:45.628811 containerd[2047]: time="2025-05-10T00:02:45.621924500Z" level=info msg="CreateContainer within sandbox \"27249a29e0e0708132bd99e3685e42b6270be6cac558cde69490ce7200b5b164\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2081615e5f061b4c0cf92015b60d1cde4fea7e56a8a5651457565bb8cfce1dd8\""
May 10 00:02:45.629565 containerd[2047]: time="2025-05-10T00:02:45.629044388Z" level=info msg="StartContainer for \"2081615e5f061b4c0cf92015b60d1cde4fea7e56a8a5651457565bb8cfce1dd8\""
May 10 00:02:45.817722 containerd[2047]: time="2025-05-10T00:02:45.817645065Z" level=info msg="StartContainer for \"2081615e5f061b4c0cf92015b60d1cde4fea7e56a8a5651457565bb8cfce1dd8\" returns successfully"
May 10 00:02:50.797296 containerd[2047]: time="2025-05-10T00:02:50.796589498Z" level=info msg="shim disconnected" id=088f80f69c235a7b01c214878872cd0d4b7444e2f9eba3d181ca59fe2a78febc namespace=k8s.io
May 10 00:02:50.797296 containerd[2047]: time="2025-05-10T00:02:50.796668026Z" level=warning msg="cleaning up after shim disconnected" id=088f80f69c235a7b01c214878872cd0d4b7444e2f9eba3d181ca59fe2a78febc namespace=k8s.io
May 10 00:02:50.797296 containerd[2047]: time="2025-05-10T00:02:50.796688138Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:02:50.803282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-088f80f69c235a7b01c214878872cd0d4b7444e2f9eba3d181ca59fe2a78febc-rootfs.mount: Deactivated successfully.
May 10 00:02:51.164777 kubelet[3375]: E0510 00:02:51.164026 3375 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 10 00:02:51.618330 kubelet[3375]: I0510 00:02:51.618249 3375 scope.go:117] "RemoveContainer" containerID="088f80f69c235a7b01c214878872cd0d4b7444e2f9eba3d181ca59fe2a78febc"
May 10 00:02:51.621957 containerd[2047]: time="2025-05-10T00:02:51.621847406Z" level=info msg="CreateContainer within sandbox \"f4cd365e65baca5a4f11539fa157427aef7cea47d741ac310b5edfa158a87fae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 10 00:02:51.648412 containerd[2047]: time="2025-05-10T00:02:51.648347882Z" level=info msg="CreateContainer within sandbox \"f4cd365e65baca5a4f11539fa157427aef7cea47d741ac310b5edfa158a87fae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"eea8115945f763c7e28e0c918b6eb300f69bc04e3329154df8c6b45b4dd47365\""
May 10 00:02:51.649633 containerd[2047]: time="2025-05-10T00:02:51.649073078Z" level=info msg="StartContainer for \"eea8115945f763c7e28e0c918b6eb300f69bc04e3329154df8c6b45b4dd47365\""
May 10 00:02:51.768093 containerd[2047]: time="2025-05-10T00:02:51.768021471Z" level=info msg="StartContainer for \"eea8115945f763c7e28e0c918b6eb300f69bc04e3329154df8c6b45b4dd47365\" returns successfully"
May 10 00:03:01.165884 kubelet[3375]: E0510 00:03:01.165796 3375 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
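The two "Failed to update lease" errors bracket the kube-scheduler restart: kubelet renews its Lease object in the kube-node-lease namespace roughly every 10 seconds as the node heartbeat, and with control-plane containers on this node cycling, the PUT to the apiserver at 172.31.31.45:6443 timed out. The lease's last successful renewal can be read back with client-go; a sketch assuming in-cluster credentials with read access to kube-node-lease:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Each node owns one Lease named after itself; kubelet's renewals
    	// are the PUTs that time out in the log above.
    	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
    		Get(context.Background(), "ip-172-31-31-45", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	if lease.Spec.HolderIdentity != nil {
    		fmt.Println("holder:", *lease.Spec.HolderIdentity)
    	}
    	fmt.Println("last renew:", lease.Spec.RenewTime)
    }

The same object is visible from the command line with kubectl get lease ip-172-31-31-45 -n kube-node-lease -o yaml.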