Apr 30 00:44:14.199218 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 30 00:44:14.199265 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:44:14.199292 kernel: KASLR disabled due to lack of seed
Apr 30 00:44:14.199309 kernel: efi: EFI v2.7 by EDK II
Apr 30 00:44:14.199325 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Apr 30 00:44:14.199341 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:44:14.199359 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 30 00:44:14.199375 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 30 00:44:14.199391 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 30 00:44:14.199407 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Apr 30 00:44:14.199427 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 30 00:44:14.199443 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 30 00:44:14.199459 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 30 00:44:14.199475 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 30 00:44:14.199494 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 30 00:44:14.199514 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 30 00:44:14.199532 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 30 00:44:14.199549 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 30 00:44:14.199565 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 30 00:44:14.199582 kernel: printk: bootconsole [uart0] enabled
Apr 30 00:44:14.199598 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:44:14.199616 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 30 00:44:14.199633 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 30 00:44:14.199650 kernel: Zone ranges:
Apr 30 00:44:14.199667 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 00:44:14.199683 kernel: DMA32 empty
Apr 30 00:44:14.199704 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 30 00:44:14.199722 kernel: Movable zone start for each node
Apr 30 00:44:14.199741 kernel: Early memory node ranges
Apr 30 00:44:14.199757 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 30 00:44:14.199774 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 30 00:44:14.199791 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 30 00:44:14.199808 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 30 00:44:14.199825 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 30 00:44:14.199843 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 30 00:44:14.199860 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 30 00:44:14.199877 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 30 00:44:14.199894 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 30 00:44:14.199915 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 30 00:44:14.199935 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:44:14.199960 kernel: psci: PSCIv1.0 detected in firmware.
Apr 30 00:44:14.199979 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:44:14.199998 kernel: psci: Trusted OS migration not required
Apr 30 00:44:14.200020 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:44:14.200038 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:44:14.200056 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:44:14.200075 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 00:44:14.200095 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:44:14.200196 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:44:14.200218 kernel: CPU features: detected: Spectre-v2
Apr 30 00:44:14.200237 kernel: CPU features: detected: Spectre-v3a
Apr 30 00:44:14.200255 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:44:14.200273 kernel: CPU features: detected: ARM erratum 1742098
Apr 30 00:44:14.200290 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 30 00:44:14.200316 kernel: alternatives: applying boot alternatives
Apr 30 00:44:14.200337 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:44:14.200356 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:44:14.200374 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:44:14.200392 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:44:14.200409 kernel: Fallback order for Node 0: 0
Apr 30 00:44:14.200427 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 30 00:44:14.200444 kernel: Policy zone: Normal
Apr 30 00:44:14.200462 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:44:14.200479 kernel: software IO TLB: area num 2.
Apr 30 00:44:14.200497 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 30 00:44:14.200521 kernel: Memory: 3820152K/4030464K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210312K reserved, 0K cma-reserved)
Apr 30 00:44:14.200539 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:44:14.200556 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:44:14.200575 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:44:14.200593 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:44:14.200611 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:44:14.200629 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:44:14.200648 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:44:14.200665 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:44:14.200682 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:44:14.200699 kernel: GICv3: 96 SPIs implemented
Apr 30 00:44:14.200721 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:44:14.200738 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:44:14.200755 kernel: GICv3: GICv3 features: 16 PPIs
Apr 30 00:44:14.200773 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 30 00:44:14.200790 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 30 00:44:14.200807 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:44:14.200825 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:44:14.200842 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 30 00:44:14.200859 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 30 00:44:14.200876 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 30 00:44:14.200894 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:44:14.200911 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 30 00:44:14.200932 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 30 00:44:14.200950 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 30 00:44:14.200967 kernel: Console: colour dummy device 80x25
Apr 30 00:44:14.200985 kernel: printk: console [tty1] enabled
Apr 30 00:44:14.201003 kernel: ACPI: Core revision 20230628
Apr 30 00:44:14.201021 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 30 00:44:14.201039 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:44:14.201056 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:44:14.201074 kernel: landlock: Up and running.
Apr 30 00:44:14.201095 kernel: SELinux: Initializing.
Apr 30 00:44:14.201195 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:44:14.201215 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:44:14.201234 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:44:14.201252 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:44:14.201269 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:44:14.201287 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:44:14.201305 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 30 00:44:14.201323 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 30 00:44:14.201346 kernel: Remapping and enabling EFI services.
Apr 30 00:44:14.201365 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:44:14.201382 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:44:14.201400 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 30 00:44:14.201417 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 30 00:44:14.201435 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 30 00:44:14.201453 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:44:14.201470 kernel: SMP: Total of 2 processors activated.
Apr 30 00:44:14.201488 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:44:14.201509 kernel: CPU features: detected: 32-bit EL1 Support
Apr 30 00:44:14.201527 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:44:14.201545 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:44:14.201574 kernel: alternatives: applying system-wide alternatives
Apr 30 00:44:14.201597 kernel: devtmpfs: initialized
Apr 30 00:44:14.201615 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:44:14.201634 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:44:14.201653 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:44:14.201671 kernel: SMBIOS 3.0.0 present.
Apr 30 00:44:14.201691 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 30 00:44:14.201714 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:44:14.201733 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:44:14.201751 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:44:14.201770 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:44:14.201788 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:44:14.201807 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Apr 30 00:44:14.201827 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:44:14.201851 kernel: cpuidle: using governor menu
Apr 30 00:44:14.201870 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:44:14.201888 kernel: ASID allocator initialised with 65536 entries
Apr 30 00:44:14.201906 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:44:14.201925 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:44:14.201944 kernel: Modules: 17504 pages in range for non-PLT usage
Apr 30 00:44:14.201963 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:44:14.201982 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:44:14.202001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:44:14.202024 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:44:14.202043 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:44:14.202061 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:44:14.202079 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:44:14.202113 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:44:14.202138 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:44:14.202157 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:44:14.202202 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:44:14.202222 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:44:14.202247 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:44:14.202266 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:44:14.202284 kernel: ACPI: Interpreter enabled
Apr 30 00:44:14.202302 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:44:14.202320 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:44:14.202339 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Apr 30 00:44:14.202635 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:44:14.202849 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:44:14.203057 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:44:14.203305 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Apr 30 00:44:14.203532 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Apr 30 00:44:14.203563 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 30 00:44:14.203584 kernel: acpiphp: Slot [1] registered
Apr 30 00:44:14.203604 kernel: acpiphp: Slot [2] registered
Apr 30 00:44:14.203623 kernel: acpiphp: Slot [3] registered
Apr 30 00:44:14.203643 kernel: acpiphp: Slot [4] registered
Apr 30 00:44:14.203686 kernel: acpiphp: Slot [5] registered
Apr 30 00:44:14.203758 kernel: acpiphp: Slot [6] registered
Apr 30 00:44:14.203804 kernel: acpiphp: Slot [7] registered
Apr 30 00:44:14.203827 kernel: acpiphp: Slot [8] registered
Apr 30 00:44:14.203846 kernel: acpiphp: Slot [9] registered
Apr 30 00:44:14.203866 kernel: acpiphp: Slot [10] registered
Apr 30 00:44:14.203893 kernel: acpiphp: Slot [11] registered
Apr 30 00:44:14.203914 kernel: acpiphp: Slot [12] registered
Apr 30 00:44:14.203934 kernel: acpiphp: Slot [13] registered
Apr 30 00:44:14.203953 kernel: acpiphp: Slot [14] registered
Apr 30 00:44:14.203979 kernel: acpiphp: Slot [15] registered
Apr 30 00:44:14.203998 kernel: acpiphp: Slot [16] registered
Apr 30 00:44:14.204017 kernel: acpiphp: Slot [17] registered
Apr 30 00:44:14.204035 kernel: acpiphp: Slot [18] registered
Apr 30 00:44:14.204054 kernel: acpiphp: Slot [19] registered
Apr 30 00:44:14.204072 kernel: acpiphp: Slot [20] registered
Apr 30 00:44:14.204090 kernel: acpiphp: Slot [21] registered
Apr 30 00:44:14.205705 kernel: acpiphp: Slot [22] registered
Apr 30 00:44:14.207147 kernel: acpiphp: Slot [23] registered
Apr 30 00:44:14.207231 kernel: acpiphp: Slot [24] registered
Apr 30 00:44:14.207252 kernel: acpiphp: Slot [25] registered
Apr 30 00:44:14.207271 kernel: acpiphp: Slot [26] registered
Apr 30 00:44:14.207290 kernel: acpiphp: Slot [27] registered
Apr 30 00:44:14.207308 kernel: acpiphp: Slot [28] registered
Apr 30 00:44:14.207328 kernel: acpiphp: Slot [29] registered
Apr 30 00:44:14.207347 kernel: acpiphp: Slot [30] registered
Apr 30 00:44:14.207367 kernel: acpiphp: Slot [31] registered
Apr 30 00:44:14.207385 kernel: PCI host bridge to bus 0000:00
Apr 30 00:44:14.207699 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 30 00:44:14.207903 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:44:14.208091 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 30 00:44:14.210374 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Apr 30 00:44:14.210618 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 30 00:44:14.210849 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 30 00:44:14.211080 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 30 00:44:14.211435 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 30 00:44:14.211660 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 30 00:44:14.211882 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 30 00:44:14.213306 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 30 00:44:14.213644 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 30 00:44:14.213869 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 30 00:44:14.214091 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 30 00:44:14.215577 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 30 00:44:14.215793 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Apr 30 00:44:14.216023 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Apr 30 00:44:14.218684 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Apr 30 00:44:14.218922 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Apr 30 00:44:14.219176 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Apr 30 00:44:14.219394 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 30 00:44:14.219601 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:44:14.219796 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 30 00:44:14.219824 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:44:14.219845 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:44:14.219865 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:44:14.219884 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:44:14.219903 kernel: iommu: Default domain type: Translated
Apr 30 00:44:14.219922 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:44:14.219950 kernel: efivars: Registered efivars operations
Apr 30 00:44:14.219969 kernel: vgaarb: loaded
Apr 30 00:44:14.219987 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:44:14.220006 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:44:14.220025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:44:14.220044 kernel: pnp: PnP ACPI init
Apr 30 00:44:14.224263 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 30 00:44:14.224311 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:44:14.224342 kernel: NET: Registered PF_INET protocol family
Apr 30 00:44:14.224362 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:44:14.224382 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:44:14.224401 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:44:14.224420 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:44:14.224438 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:44:14.224457 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:44:14.224476 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:44:14.224494 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:44:14.224517 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:44:14.224535 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:44:14.224554 kernel: kvm [1]: HYP mode not available
Apr 30 00:44:14.224572 kernel: Initialise system trusted keyrings
Apr 30 00:44:14.224591 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:44:14.224609 kernel: Key type asymmetric registered
Apr 30 00:44:14.224627 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:44:14.224646 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:44:14.224665 kernel: io scheduler mq-deadline registered
Apr 30 00:44:14.224687 kernel: io scheduler kyber registered
Apr 30 00:44:14.224706 kernel: io scheduler bfq registered
Apr 30 00:44:14.224950 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 30 00:44:14.224978 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:44:14.224998 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:44:14.225017 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 30 00:44:14.225036 kernel: ACPI: button: Sleep Button [SLPB]
Apr 30 00:44:14.225055 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:44:14.225079 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 00:44:14.225328 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 30 00:44:14.225356 kernel: printk: console [ttyS0] disabled
Apr 30 00:44:14.225376 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 30 00:44:14.225395 kernel: printk: console [ttyS0] enabled
Apr 30 00:44:14.225413 kernel: printk: bootconsole [uart0] disabled
Apr 30 00:44:14.225432 kernel: thunder_xcv, ver 1.0
Apr 30 00:44:14.225450 kernel: thunder_bgx, ver 1.0
Apr 30 00:44:14.225469 kernel: nicpf, ver 1.0
Apr 30 00:44:14.225493 kernel: nicvf, ver 1.0
Apr 30 00:44:14.227601 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:44:14.227815 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:44:13 UTC (1745973853)
Apr 30 00:44:14.227842 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:44:14.227862 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 30 00:44:14.227881 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:44:14.227900 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:44:14.227919 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:44:14.227945 kernel: Segment Routing with IPv6
Apr 30 00:44:14.227966 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:44:14.227985 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:44:14.228003 kernel: Key type dns_resolver registered
Apr 30 00:44:14.228022 kernel: registered taskstats version 1
Apr 30 00:44:14.228040 kernel: Loading compiled-in X.509 certificates
Apr 30 00:44:14.228059 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:44:14.228078 kernel: Key type .fscrypt registered
Apr 30 00:44:14.228123 kernel: Key type fscrypt-provisioning registered
Apr 30 00:44:14.228172 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:44:14.228195 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:44:14.228214 kernel: ima: No architecture policies found
Apr 30 00:44:14.228233 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:44:14.228252 kernel: clk: Disabling unused clocks
Apr 30 00:44:14.228270 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:44:14.228289 kernel: Run /init as init process
Apr 30 00:44:14.228308 kernel: with arguments:
Apr 30 00:44:14.228326 kernel: /init
Apr 30 00:44:14.228344 kernel: with environment:
Apr 30 00:44:14.228369 kernel: HOME=/
Apr 30 00:44:14.228388 kernel: TERM=linux
Apr 30 00:44:14.228407 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:44:14.228431 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:44:14.228455 systemd[1]: Detected virtualization amazon.
Apr 30 00:44:14.228476 systemd[1]: Detected architecture arm64.
Apr 30 00:44:14.228496 systemd[1]: Running in initrd.
Apr 30 00:44:14.228520 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:44:14.228540 systemd[1]: Hostname set to <localhost>.
Apr 30 00:44:14.228561 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:44:14.228582 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:44:14.228603 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:44:14.228623 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:44:14.228645 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:44:14.228666 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:44:14.228692 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:44:14.228713 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:44:14.228737 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:44:14.228758 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:44:14.228779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:44:14.228799 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:44:14.228820 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:44:14.228845 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:44:14.228866 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:44:14.228886 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:44:14.228907 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:44:14.228927 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:44:14.228948 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:44:14.228968 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:44:14.228989 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:44:14.229009 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:44:14.229034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:44:14.229055 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:44:14.229075 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:44:14.229095 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:44:14.229165 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:44:14.229188 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:44:14.229210 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:44:14.229231 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:44:14.229260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:44:14.229283 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:44:14.229304 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:44:14.229325 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:44:14.229347 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:44:14.229428 systemd-journald[251]: Collecting audit messages is disabled.
Apr 30 00:44:14.229475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:44:14.229498 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:44:14.229526 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:44:14.229547 kernel: Bridge firewalling registered
Apr 30 00:44:14.229567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:44:14.229587 systemd-journald[251]: Journal started
Apr 30 00:44:14.229624 systemd-journald[251]: Runtime Journal (/run/log/journal/ec27f3e77ed6a0548f6bc26e8e473beb) is 8.0M, max 75.3M, 67.3M free.
Apr 30 00:44:14.170422 systemd-modules-load[252]: Inserted module 'overlay'
Apr 30 00:44:14.219383 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 30 00:44:14.237291 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:44:14.245128 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:44:14.254460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:44:14.266498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:44:14.273373 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:44:14.285703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:44:14.314903 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:44:14.332440 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:44:14.333811 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:44:14.352212 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:44:14.368411 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:44:14.402230 systemd-resolved[281]: Positive Trust Anchors:
Apr 30 00:44:14.402265 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:44:14.402327 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:44:14.426567 dracut-cmdline[289]: dracut-dracut-053
Apr 30 00:44:14.426567 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:44:14.556154 kernel: SCSI subsystem initialized
Apr 30 00:44:14.564133 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:44:14.577140 kernel: iscsi: registered transport (tcp)
Apr 30 00:44:14.599210 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:44:14.599284 kernel: QLogic iSCSI HBA Driver
Apr 30 00:44:14.678196 kernel: random: crng init done
Apr 30 00:44:14.678376 systemd-resolved[281]: Defaulting to hostname 'linux'.
Apr 30 00:44:14.681775 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:44:14.691193 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:44:14.706214 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:44:14.715416 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:44:14.758284 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:44:14.758373 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:44:14.758401 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:44:14.825144 kernel: raid6: neonx8 gen() 6750 MB/s
Apr 30 00:44:14.842132 kernel: raid6: neonx4 gen() 6569 MB/s
Apr 30 00:44:14.859132 kernel: raid6: neonx2 gen() 5471 MB/s
Apr 30 00:44:14.876130 kernel: raid6: neonx1 gen() 3969 MB/s
Apr 30 00:44:14.893135 kernel: raid6: int64x8 gen() 3826 MB/s
Apr 30 00:44:14.910132 kernel: raid6: int64x4 gen() 3713 MB/s
Apr 30 00:44:14.927131 kernel: raid6: int64x2 gen() 3612 MB/s
Apr 30 00:44:14.944966 kernel: raid6: int64x1 gen() 2770 MB/s
Apr 30 00:44:14.944999 kernel: raid6: using algorithm neonx8 gen() 6750 MB/s
Apr 30 00:44:14.962939 kernel: raid6: .... xor() 4864 MB/s, rmw enabled
Apr 30 00:44:14.962979 kernel: raid6: using neon recovery algorithm
Apr 30 00:44:14.971584 kernel: xor: measuring software checksum speed
Apr 30 00:44:14.971636 kernel: 8regs : 10948 MB/sec
Apr 30 00:44:14.972710 kernel: 32regs : 11942 MB/sec
Apr 30 00:44:14.973924 kernel: arm64_neon : 9284 MB/sec
Apr 30 00:44:14.973957 kernel: xor: using function: 32regs (11942 MB/sec)
Apr 30 00:44:15.058152 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:44:15.077303 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:44:15.088398 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:44:15.130889 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Apr 30 00:44:15.140081 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:44:15.152584 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:44:15.189216 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Apr 30 00:44:15.246990 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:44:15.257426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:44:15.384532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:44:15.397414 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:44:15.446329 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:44:15.456204 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:44:15.466727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:44:15.471830 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:44:15.489399 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:44:15.535740 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:44:15.580662 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 00:44:15.580726 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 30 00:44:15.624510 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 30 00:44:15.624778 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 30 00:44:15.625013 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:3b:ef:c6:72:19
Apr 30 00:44:15.615162 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:44:15.615285 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:44:15.618351 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:44:15.620593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:44:15.620739 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:44:15.623025 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:44:15.629231 (udev-worker)[517]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:44:15.645992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:44:15.663875 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 30 00:44:15.663941 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 30 00:44:15.675923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:44:15.683143 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 30 00:44:15.686379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:44:15.701137 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:44:15.701208 kernel: GPT:9289727 != 16777215
Apr 30 00:44:15.701234 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:44:15.703033 kernel: GPT:9289727 != 16777215
Apr 30 00:44:15.703087 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:44:15.703138 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:44:15.724163 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:44:15.795633 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (534)
Apr 30 00:44:15.817137 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (526)
Apr 30 00:44:15.846909 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 30 00:44:15.931268 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 30 00:44:15.961593 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 30 00:44:15.976534 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 30 00:44:15.994018 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 00:44:16.007564 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:44:16.026208 disk-uuid[660]: Primary Header is updated.
Apr 30 00:44:16.026208 disk-uuid[660]: Secondary Entries is updated.
Apr 30 00:44:16.026208 disk-uuid[660]: Secondary Header is updated.
Apr 30 00:44:16.036163 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:44:16.044834 kernel: GPT:disk_guids don't match.
Apr 30 00:44:16.044912 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:44:16.044940 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:44:16.056176 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:44:17.062145 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:44:17.063820 disk-uuid[661]: The operation has completed successfully.
Apr 30 00:44:17.251263 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:44:17.254194 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:44:17.315347 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:44:17.325379 sh[1004]: Success
Apr 30 00:44:17.351147 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 00:44:17.470602 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:44:17.482517 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:44:17.491343 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
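The "GPT:9289727 != 16777215" complaints above come from the kernel comparing the primary GPT header's AlternateLBA field against the disk's true last sector: the image was built for a smaller disk than the EBS volume it was written to, so the backup header is not at the end, and disk-uuid.service then rewrites the headers on first boot. A minimal sketch of the same check, assuming a readable raw disk or disk image (the /dev/nvme0n1 path is illustrative; reading a block device needs root) and 512-byte logical sectors:

```python
import struct

DISK = "/dev/nvme0n1"  # illustrative path; any raw GPT disk image works
SECTOR = 512           # assumed logical sector size

with open(DISK, "rb") as f:
    # The primary GPT header lives in LBA 1, right after the protective MBR.
    f.seek(1 * SECTOR)
    hdr = f.read(92)
    assert hdr[:8] == b"EFI PART", "not a GPT disk"
    # Offset 0x20 holds AlternateLBA: where the header claims its backup is.
    (alternate_lba,) = struct.unpack_from("<Q", hdr, 0x20)
    # The backup header is supposed to occupy the disk's very last sector.
    f.seek(0, 2)
    last_lba = f.tell() // SECTOR - 1
    if alternate_lba != last_lba:
        # Same mismatch the kernel logs as "GPT:9289727 != 16777215".
        print(f"backup GPT header mismatch: {alternate_lba} != {last_lba}")
```

On Flatcar this is repaired automatically (the disk-uuid[660] "Primary Header is updated" lines); on an ordinary system a partitioning tool would be used, as the "Use GNU Parted" message suggests.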
Apr 30 00:44:17.525127 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 00:44:17.525191 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:44:17.525218 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:44:17.526816 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:44:17.528057 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:44:17.557133 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 00:44:17.561973 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:44:17.565491 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:44:17.582457 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:44:17.589406 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:44:17.622548 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:44:17.622622 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:44:17.622658 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 00:44:17.632609 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 00:44:17.651276 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:44:17.650856 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:44:17.661182 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:44:17.674430 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:44:17.785209 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:44:17.795373 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:44:17.862472 systemd-networkd[1199]: lo: Link UP
Apr 30 00:44:17.863204 systemd-networkd[1199]: lo: Gained carrier
Apr 30 00:44:17.868663 systemd-networkd[1199]: Enumeration completed
Apr 30 00:44:17.870396 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:44:17.873066 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:44:17.873074 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:44:17.883070 systemd-networkd[1199]: eth0: Link UP
Apr 30 00:44:17.883093 systemd-networkd[1199]: eth0: Gained carrier
Apr 30 00:44:17.883152 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:44:17.888811 systemd[1]: Reached target network.target - Network.
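The dm-0 device mounted above is the dm-verity target that verity-setup.service built from the verity.usr partition and the verity.usrhash value on the kernel command line: /usr is read-only, and every block read is checked against a Merkle tree whose root must equal that hash. A deliberately simplified sketch of the tree construction (real dm-verity mixes a per-volume salt into each digest and lays the tree out on disk, so this will not reproduce the usrhash above; it only shows the shape):

```python
import hashlib

BLOCK = 4096  # dm-verity's default data and hash block size

def toy_verity_root(image_path: str) -> str:
    # Level 0: hash every 4 KiB data block (last block zero-padded).
    with open(image_path, "rb") as f:
        level = []
        while chunk := f.read(BLOCK):
            level.append(hashlib.sha256(chunk.ljust(BLOCK, b"\0")).digest())
    # Pack digests into 4 KiB hash blocks and hash again, repeating
    # until a single digest, the root hash, remains.
    while len(level) > 1:
        packed = b"".join(level)
        level = [hashlib.sha256(packed[i:i + BLOCK].ljust(BLOCK, b"\0")).digest()
                 for i in range(0, len(packed), BLOCK)]
    return level[0].hex()
```

Because only the root hash has to be trusted (here, carried on the signed kernel command line), a single flipped bit anywhere in the /usr image makes some read fail verification.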
Apr 30 00:44:17.905327 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.25.63/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 00:44:17.915901 ignition[1113]: Ignition 2.19.0
Apr 30 00:44:17.915921 ignition[1113]: Stage: fetch-offline
Apr 30 00:44:17.916468 ignition[1113]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:44:17.916492 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:44:17.916982 ignition[1113]: Ignition finished successfully
Apr 30 00:44:17.923182 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:44:17.935456 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:44:17.969306 ignition[1207]: Ignition 2.19.0
Apr 30 00:44:17.969339 ignition[1207]: Stage: fetch
Apr 30 00:44:17.970426 ignition[1207]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:44:17.970452 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:44:17.970610 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:44:18.007423 ignition[1207]: PUT result: OK
Apr 30 00:44:18.010714 ignition[1207]: parsed url from cmdline: ""
Apr 30 00:44:18.010741 ignition[1207]: no config URL provided
Apr 30 00:44:18.010761 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:44:18.010790 ignition[1207]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:44:18.010837 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:44:18.014549 ignition[1207]: PUT result: OK
Apr 30 00:44:18.014704 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 30 00:44:18.022022 ignition[1207]: GET result: OK
Apr 30 00:44:18.022229 ignition[1207]: parsing config with SHA512: 1555ac84ab770edc4b1f56122a81f183ae1e63986ab8d6e395ab17f8deea8168b9a15990230ec8892a5efee609d22df8b1f3b0e00d2365b535c41502a8a25166
Apr 30 00:44:18.030730 unknown[1207]: fetched base config from "system"
Apr 30 00:44:18.030757 unknown[1207]: fetched base config from "system"
Apr 30 00:44:18.030772 unknown[1207]: fetched user config from "aws"
Apr 30 00:44:18.034182 ignition[1207]: fetch: fetch complete
Apr 30 00:44:18.034195 ignition[1207]: fetch: fetch passed
Apr 30 00:44:18.034664 ignition[1207]: Ignition finished successfully
Apr 30 00:44:18.041428 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:44:18.052424 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:44:18.090574 ignition[1214]: Ignition 2.19.0
Apr 30 00:44:18.090594 ignition[1214]: Stage: kargs
Apr 30 00:44:18.091784 ignition[1214]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:44:18.091810 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:44:18.091959 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:44:18.095357 ignition[1214]: PUT result: OK
Apr 30 00:44:18.105161 ignition[1214]: kargs: kargs passed
Apr 30 00:44:18.105319 ignition[1214]: Ignition finished successfully
Apr 30 00:44:18.109192 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:44:18.121456 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
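The PUT/GET pairs in the fetch stage are the IMDSv2 session flow: Ignition first PUTs to the token endpoint, then presents the returned token on the user-data GET (the 2019-10-01 API version matches the log). A minimal standard-library sketch of the same two requests; it only works from inside an EC2 instance, since 169.254.169.254 is link-local:

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT to the token endpoint; the TTL header is mandatory.
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    token = resp.read().decode()

# Step 2: GET the user data (the Ignition config), presenting the token.
req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    user_data = resp.read()

print(user_data[:80])
```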
Apr 30 00:44:18.152768 ignition[1220]: Ignition 2.19.0
Apr 30 00:44:18.152798 ignition[1220]: Stage: disks
Apr 30 00:44:18.153688 ignition[1220]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:44:18.153713 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:44:18.153862 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:44:18.156380 ignition[1220]: PUT result: OK
Apr 30 00:44:18.166166 ignition[1220]: disks: disks passed
Apr 30 00:44:18.166320 ignition[1220]: Ignition finished successfully
Apr 30 00:44:18.170600 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:44:18.174029 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:44:18.176388 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:44:18.180425 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:44:18.182431 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:44:18.182498 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:44:18.200715 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:44:18.255401 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 00:44:18.261060 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:44:18.272385 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:44:18.371125 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 00:44:18.372070 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:44:18.375838 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:44:18.392313 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:44:18.402654 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:44:18.407806 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 00:44:18.412592 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:44:18.412695 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:44:18.426319 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:44:18.433377 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:44:18.457384 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1247)
Apr 30 00:44:18.461534 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:44:18.461585 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:44:18.463518 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 00:44:18.479142 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 00:44:18.482287 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:44:18.536645 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:44:18.547388 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:44:18.555935 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:44:18.564555 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:44:18.720593 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:44:18.730300 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:44:18.746241 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:44:18.759184 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:44:18.763138 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:44:18.792759 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:44:18.802821 ignition[1361]: INFO : Ignition 2.19.0
Apr 30 00:44:18.802821 ignition[1361]: INFO : Stage: mount
Apr 30 00:44:18.806073 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:44:18.806073 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:44:18.806073 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:44:18.813594 ignition[1361]: INFO : PUT result: OK
Apr 30 00:44:18.817359 ignition[1361]: INFO : mount: mount passed
Apr 30 00:44:18.818924 ignition[1361]: INFO : Ignition finished successfully
Apr 30 00:44:18.821904 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:44:18.833568 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:44:18.851780 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:44:18.889125 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1372)
Apr 30 00:44:18.893133 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:44:18.893178 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:44:18.893205 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 00:44:18.899144 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 00:44:18.902398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:44:18.939046 ignition[1389]: INFO : Ignition 2.19.0
Apr 30 00:44:18.939046 ignition[1389]: INFO : Stage: files
Apr 30 00:44:18.942299 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:44:18.942299 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:44:18.946479 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:44:18.949474 ignition[1389]: INFO : PUT result: OK
Apr 30 00:44:18.953969 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:44:18.956921 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:44:18.956921 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:44:18.965162 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:44:18.967865 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:44:18.971312 unknown[1389]: wrote ssh authorized keys file for user: core
Apr 30 00:44:18.973507 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:44:18.978433 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Apr 30 00:44:18.982084 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Apr 30 00:44:19.148418 systemd-networkd[1199]: eth0: Gained IPv6LL
Apr 30 00:44:20.227442 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 00:44:26.719095 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Apr 30 00:44:26.723538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:44:26.723538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:44:26.723538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:44:26.723538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:44:26.723538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:44:26.723538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:44:26.723538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:44:26.723538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:44:26.723538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:44:26.753551 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:44:26.753551 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:44:26.753551 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:44:26.753551 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:44:26.753551 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Apr 30 00:44:27.245673 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 30 00:44:27.658532 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:44:27.658532 ignition[1389]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 30 00:44:27.665113 ignition[1389]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:44:27.665113 ignition[1389]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:44:27.665113 ignition[1389]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 30 00:44:27.665113 ignition[1389]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:44:27.665113 ignition[1389]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:44:27.665113 ignition[1389]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:44:27.680224 ignition[1389]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:44:27.680224 ignition[1389]: INFO : files: files passed
Apr 30 00:44:27.680224 ignition[1389]: INFO : Ignition finished successfully
Apr 30 00:44:27.674889 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:44:27.700396 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:44:27.720700 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:44:27.731372 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:44:27.731555 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:44:27.749815 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:44:27.749815 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:44:27.758926 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:44:27.766239 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:44:27.771745 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:44:27.789500 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
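The ops in the files stage map directly onto entries in the user-provided Ignition config fetched from user data earlier: ensureUsers op(1)/op(2) come from a passwd.users entry, createFiles op(3)..op(a) from storage.files and storage.links, and op(b)/op(d) from a systemd unit marked enabled. A hypothetical, trimmed-down config of that shape (Ignition v3 JSON schema; the ssh key and unit contents are placeholders, the paths mirror the log), built and printed with Python:

```python
import json

# Hypothetical Ignition v3 config that would drive the ops logged above.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            # ensureUsers op(1)/op(2): user "core" plus authorized keys.
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}
        ]
    },
    "storage": {
        "files": [
            # createFiles op(3): fetched over HTTPS into /sysroot.
            {
                "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"},
            }
        ],
        "links": [
            # op(9): the sysext symlink under /etc/extensions.
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            # op(b)/op(d): write the unit and preset it to enabled.
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."}
        ]
    },
}
print(json.dumps(config, indent=2))
```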
Apr 30 00:44:27.843882 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:44:27.844069 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:44:27.849451 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:44:27.853058 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:44:27.856722 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:44:27.875509 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:44:27.907081 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:44:27.916386 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:44:27.953537 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:44:27.957987 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:44:27.960494 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 00:44:27.962802 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:44:27.963349 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:44:27.973070 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:44:27.975352 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:44:27.980601 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 00:44:27.983571 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:44:27.986806 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:44:27.994286 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:44:27.996787 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:44:28.000896 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 00:44:28.004739 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:44:28.007315 systemd[1]: Stopped target swap.target - Swaps. Apr 30 00:44:28.010169 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:44:28.010392 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:44:28.014474 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:44:28.018158 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:44:28.020905 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 00:44:28.022172 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:44:28.024821 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:44:28.025087 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 00:44:28.031117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:44:28.037736 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:44:28.047796 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:44:28.048008 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:44:28.063211 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Apr 30 00:44:28.066300 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:44:28.066613 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:44:28.093591 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:44:28.093796 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:44:28.095366 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:44:28.096410 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:44:28.098223 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:44:28.108899 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:44:28.139378 ignition[1442]: INFO : Ignition 2.19.0 Apr 30 00:44:28.139378 ignition[1442]: INFO : Stage: umount Apr 30 00:44:28.139378 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:44:28.139378 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 00:44:28.139378 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 00:44:28.139378 ignition[1442]: INFO : PUT result: OK Apr 30 00:44:28.109168 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:44:28.169525 ignition[1442]: INFO : umount: umount passed Apr 30 00:44:28.169525 ignition[1442]: INFO : Ignition finished successfully Apr 30 00:44:28.143676 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:44:28.150593 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:44:28.150837 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:44:28.156830 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 00:44:28.156998 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:44:28.161247 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:44:28.161408 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:44:28.167151 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 00:44:28.167242 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 00:44:28.180760 systemd[1]: Stopped target network.target - Network. Apr 30 00:44:28.182564 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 00:44:28.182713 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:44:28.185079 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:44:28.186632 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:44:28.191680 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:44:28.194153 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:44:28.196891 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:44:28.203955 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:44:28.204083 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:44:28.207842 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:44:28.207919 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:44:28.220211 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:44:28.220319 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Apr 30 00:44:28.222726 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:44:28.222806 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:44:28.226795 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:44:28.229592 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:44:28.239159 systemd-networkd[1199]: eth0: DHCPv6 lease lost Apr 30 00:44:28.242361 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:44:28.242627 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:44:28.253417 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:44:28.256126 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:44:28.281328 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:44:28.282198 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:44:28.300170 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:44:28.305791 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:44:28.305938 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:44:28.320413 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:44:28.320532 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:44:28.325068 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:44:28.326569 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:44:28.338171 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:44:28.338275 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:44:28.346858 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:44:28.349907 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:44:28.350128 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:44:28.367787 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:44:28.368694 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:44:28.378467 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:44:28.379159 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:44:28.383159 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 00:44:28.383236 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:44:28.385350 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:44:28.389288 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:44:28.391779 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:44:28.391865 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:44:28.402719 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:44:28.402809 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:44:28.405662 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:44:28.405746 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:44:28.422387 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Apr 30 00:44:28.426082 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:44:28.426224 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:44:28.428962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:44:28.429045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:44:28.432144 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:44:28.432449 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:44:28.467304 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:44:28.467714 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:44:28.475136 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:44:28.484556 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:44:28.505809 systemd[1]: Switching root. Apr 30 00:44:28.546461 systemd-journald[251]: Journal stopped Apr 30 00:44:30.336433 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Apr 30 00:44:30.336573 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 00:44:30.336620 kernel: SELinux: policy capability open_perms=1 Apr 30 00:44:30.336651 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 00:44:30.336684 kernel: SELinux: policy capability always_check_network=0 Apr 30 00:44:30.336715 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 00:44:30.336745 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 00:44:30.336784 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 00:44:30.336819 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 00:44:30.336849 kernel: audit: type=1403 audit(1745973868.770:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 00:44:30.336888 systemd[1]: Successfully loaded SELinux policy in 48.864ms. Apr 30 00:44:30.336935 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.840ms. Apr 30 00:44:30.336971 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:44:30.337004 systemd[1]: Detected virtualization amazon. Apr 30 00:44:30.337036 systemd[1]: Detected architecture arm64. Apr 30 00:44:30.337068 systemd[1]: Detected first boot. Apr 30 00:44:30.340196 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:44:30.340269 zram_generator::config[1485]: No configuration found. Apr 30 00:44:30.340306 systemd[1]: Populated /etc with preset unit settings. Apr 30 00:44:30.340340 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 00:44:30.340371 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 00:44:30.340406 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 00:44:30.340438 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 00:44:30.340471 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 00:44:30.340503 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 00:44:30.340538 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Apr 30 00:44:30.340568 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 00:44:30.340601 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 00:44:30.340633 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 00:44:30.340664 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 00:44:30.340695 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:44:30.340725 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:44:30.340757 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 00:44:30.340788 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 00:44:30.340822 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 00:44:30.340852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:44:30.340882 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 00:44:30.340911 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:44:30.340944 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 00:44:30.340973 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 00:44:30.341005 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 00:44:30.341039 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 00:44:30.341073 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:44:30.341122 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:44:30.341158 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:44:30.341191 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:44:30.341221 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 00:44:30.341251 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 00:44:30.341280 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:44:30.341309 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:44:30.341340 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:44:30.341376 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 00:44:30.341405 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 00:44:30.341436 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 00:44:30.341468 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 00:44:30.341497 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 00:44:30.341529 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 00:44:30.341563 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 00:44:30.341596 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 00:44:30.341631 systemd[1]: Reached target machines.target - Containers. 
Apr 30 00:44:30.341661 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 00:44:30.341691 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:44:30.341721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:44:30.341751 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 00:44:30.346149 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:44:30.346202 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:44:30.346233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:44:30.346263 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 00:44:30.346304 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:44:30.346335 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:44:30.346365 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 00:44:30.346396 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 00:44:30.346426 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 00:44:30.346457 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 00:44:30.346487 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:44:30.346518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:44:30.346552 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 00:44:30.346584 kernel: fuse: init (API version 7.39) Apr 30 00:44:30.346612 kernel: loop: module loaded Apr 30 00:44:30.346641 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 00:44:30.346680 kernel: ACPI: bus type drm_connector registered Apr 30 00:44:30.346713 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:44:30.346746 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 00:44:30.346777 systemd[1]: Stopped verity-setup.service. Apr 30 00:44:30.346809 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 00:44:30.346844 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 00:44:30.351143 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 00:44:30.351203 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 00:44:30.351251 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 00:44:30.351284 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 00:44:30.351323 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:44:30.351354 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 00:44:30.351384 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 00:44:30.351416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:44:30.351446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:44:30.351476 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:44:30.351543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Apr 30 00:44:30.351618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:44:30.351648 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:44:30.351684 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 00:44:30.351715 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 00:44:30.351789 systemd-journald[1567]: Collecting audit messages is disabled. Apr 30 00:44:30.351839 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:44:30.355205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:44:30.355261 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:44:30.355345 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 00:44:30.355383 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 00:44:30.355415 systemd-journald[1567]: Journal started Apr 30 00:44:30.355469 systemd-journald[1567]: Runtime Journal (/run/log/journal/ec27f3e77ed6a0548f6bc26e8e473beb) is 8.0M, max 75.3M, 67.3M free. Apr 30 00:44:29.777771 systemd[1]: Queued start job for default target multi-user.target. Apr 30 00:44:29.805718 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Apr 30 00:44:29.806562 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 00:44:30.362297 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:44:30.387395 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 00:44:30.401407 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:44:30.419366 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 00:44:30.421582 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:44:30.421640 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:44:30.429143 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 00:44:30.438584 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 00:44:30.447424 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 00:44:30.449664 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:44:30.454218 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 00:44:30.474701 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:44:30.477083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:44:30.481371 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 00:44:30.483572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:44:30.488080 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:44:30.494408 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 00:44:30.501821 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Apr 30 00:44:30.504427 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 00:44:30.528327 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:44:30.545631 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:44:30.562072 systemd-journald[1567]: Time spent on flushing to /var/log/journal/ec27f3e77ed6a0548f6bc26e8e473beb is 175.172ms for 907 entries. Apr 30 00:44:30.562072 systemd-journald[1567]: System Journal (/var/log/journal/ec27f3e77ed6a0548f6bc26e8e473beb) is 8.0M, max 195.6M, 187.6M free. Apr 30 00:44:30.767665 systemd-journald[1567]: Received client request to flush runtime journal. Apr 30 00:44:30.767751 kernel: loop0: detected capacity change from 0 to 114328 Apr 30 00:44:30.767802 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:44:30.767844 kernel: loop1: detected capacity change from 0 to 114432 Apr 30 00:44:30.562693 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 00:44:30.606193 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:44:30.608797 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:44:30.624847 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:44:30.655190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:44:30.745180 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:44:30.755160 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:44:30.766360 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:44:30.769209 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 00:44:30.776632 kernel: loop2: detected capacity change from 0 to 52536 Apr 30 00:44:30.786654 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:44:30.812952 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:44:30.830392 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 00:44:30.865324 kernel: loop3: detected capacity change from 0 to 201592 Apr 30 00:44:30.900437 udevadm[1638]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 00:44:30.907344 systemd-tmpfiles[1628]: ACLs are not supported, ignoring. Apr 30 00:44:30.907376 systemd-tmpfiles[1628]: ACLs are not supported, ignoring. Apr 30 00:44:30.923784 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:44:30.958660 kernel: loop4: detected capacity change from 0 to 114328 Apr 30 00:44:30.995839 kernel: loop5: detected capacity change from 0 to 114432 Apr 30 00:44:31.025290 kernel: loop6: detected capacity change from 0 to 52536 Apr 30 00:44:31.061128 kernel: loop7: detected capacity change from 0 to 201592 Apr 30 00:44:31.105500 (sd-merge)[1642]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 30 00:44:31.107739 (sd-merge)[1642]: Merged extensions into '/usr'. Apr 30 00:44:31.120260 systemd[1]: Reloading requested from client PID 1613 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:44:31.120297 systemd[1]: Reloading... 
Apr 30 00:44:31.332146 zram_generator::config[1667]: No configuration found. Apr 30 00:44:31.613151 ldconfig[1605]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:44:31.639547 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:44:31.757143 systemd[1]: Reloading finished in 633 ms. Apr 30 00:44:31.807971 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 00:44:31.813866 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:44:31.832670 systemd[1]: Starting ensure-sysext.service... Apr 30 00:44:31.847528 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:44:31.861002 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 00:44:31.873319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:44:31.881380 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... Apr 30 00:44:31.881419 systemd[1]: Reloading... Apr 30 00:44:31.912935 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 00:44:31.914453 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:44:31.917388 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 00:44:31.918144 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Apr 30 00:44:31.918400 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Apr 30 00:44:31.926172 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:44:31.926562 systemd-tmpfiles[1721]: Skipping /boot Apr 30 00:44:31.956044 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:44:31.957382 systemd-tmpfiles[1721]: Skipping /boot Apr 30 00:44:32.009621 systemd-udevd[1723]: Using default interface naming scheme 'v255'. Apr 30 00:44:32.063172 zram_generator::config[1752]: No configuration found. Apr 30 00:44:32.287659 (udev-worker)[1756]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:44:32.454732 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1781) Apr 30 00:44:32.493952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:44:32.680937 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 00:44:32.681309 systemd[1]: Reloading finished in 799 ms. Apr 30 00:44:32.712127 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:44:32.723294 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:44:32.793001 systemd[1]: Finished ensure-sysext.service. Apr 30 00:44:32.808487 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Apr 30 00:44:32.825493 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 30 00:44:32.834432 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:44:32.847483 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 00:44:32.850015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:44:32.853608 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:44:32.866308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:44:32.872094 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:44:32.886462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:44:32.892460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:44:32.894713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:44:32.900355 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:44:32.906896 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 00:44:32.920474 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:44:32.931554 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:44:32.934210 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 00:44:32.940470 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 00:44:32.950487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:44:32.955650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:44:32.958221 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:44:32.971777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:44:32.974252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:44:32.977585 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:44:32.978191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:44:32.984990 lvm[1920]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:44:32.990851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:44:33.035598 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 00:44:33.040015 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:44:33.045826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:44:33.068581 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 00:44:33.074824 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:44:33.075902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:44:33.080278 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:44:33.100443 lvm[1946]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Apr 30 00:44:33.112235 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 00:44:33.126081 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:44:33.148432 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 00:44:33.163430 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 00:44:33.165998 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:44:33.188499 augenrules[1960]: No rules Apr 30 00:44:33.196345 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:44:33.242548 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 00:44:33.245746 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:44:33.251251 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 00:44:33.270015 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 00:44:33.337218 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:44:33.412198 systemd-networkd[1933]: lo: Link UP Apr 30 00:44:33.412216 systemd-networkd[1933]: lo: Gained carrier Apr 30 00:44:33.416011 systemd-networkd[1933]: Enumeration completed Apr 30 00:44:33.416610 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:44:33.421187 systemd-resolved[1934]: Positive Trust Anchors: Apr 30 00:44:33.421213 systemd-resolved[1934]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:44:33.421276 systemd-resolved[1934]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:44:33.422943 systemd-networkd[1933]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:44:33.422962 systemd-networkd[1933]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:44:33.426009 systemd-networkd[1933]: eth0: Link UP Apr 30 00:44:33.426670 systemd-networkd[1933]: eth0: Gained carrier Apr 30 00:44:33.426837 systemd-networkd[1933]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:44:33.427494 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 00:44:33.436232 systemd-networkd[1933]: eth0: DHCPv4 address 172.31.25.63/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 00:44:33.441550 systemd-resolved[1934]: Defaulting to hostname 'linux'. Apr 30 00:44:33.445780 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:44:33.448140 systemd[1]: Reached target network.target - Network. 
Apr 30 00:44:33.449895 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:44:33.452195 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:44:33.454502 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 00:44:33.457018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 00:44:33.459964 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 00:44:33.462400 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 00:44:33.467862 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 00:44:33.473291 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 00:44:33.473365 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:44:33.475804 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:44:33.480472 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 00:44:33.485600 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 00:44:33.497634 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 00:44:33.501216 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 00:44:33.503807 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:44:33.506246 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:44:33.508360 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:44:33.508424 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:44:33.515508 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 00:44:33.522561 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 00:44:33.533733 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 00:44:33.540922 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 00:44:33.557214 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 00:44:33.559226 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 00:44:33.563967 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 00:44:33.582433 systemd[1]: Started ntpd.service - Network Time Service. Apr 30 00:44:33.592381 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 00:44:33.601296 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 30 00:44:33.607463 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 00:44:33.629615 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 00:44:33.640452 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 00:44:33.643272 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 00:44:33.646242 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Apr 30 00:44:33.647613 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 00:44:33.653304 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 00:44:33.659980 jq[1984]: false Apr 30 00:44:33.675720 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 00:44:33.676174 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 00:44:33.691205 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:34 UTC 2025 (1): Starting Apr 30 00:44:33.691205 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 00:44:33.691205 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: ---------------------------------------------------- Apr 30 00:44:33.691205 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Apr 30 00:44:33.691205 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 00:44:33.691205 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: corporation. Support and training for ntp-4 are Apr 30 00:44:33.691205 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: available at https://www.nwtime.org/support Apr 30 00:44:33.691205 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: ---------------------------------------------------- Apr 30 00:44:33.690057 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:34 UTC 2025 (1): Starting Apr 30 00:44:33.690145 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 00:44:33.690168 ntpd[1987]: ---------------------------------------------------- Apr 30 00:44:33.690187 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Apr 30 00:44:33.690204 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 00:44:33.690222 ntpd[1987]: corporation. 
Support and training for ntp-4 are Apr 30 00:44:33.690246 ntpd[1987]: available at https://www.nwtime.org/support Apr 30 00:44:33.690266 ntpd[1987]: ---------------------------------------------------- Apr 30 00:44:33.704482 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: proto: precision = 0.096 usec (-23) Apr 30 00:44:33.704482 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: basedate set to 2025-04-17 Apr 30 00:44:33.704482 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: gps base set to 2025-04-20 (week 2363) Apr 30 00:44:33.701660 ntpd[1987]: proto: precision = 0.096 usec (-23) Apr 30 00:44:33.702083 ntpd[1987]: basedate set to 2025-04-17 Apr 30 00:44:33.702149 ntpd[1987]: gps base set to 2025-04-20 (week 2363) Apr 30 00:44:33.716743 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 00:44:33.719283 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 00:44:33.719283 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 00:44:33.719283 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 00:44:33.719283 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: Listen normally on 3 eth0 172.31.25.63:123 Apr 30 00:44:33.719283 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: Listen normally on 4 lo [::1]:123 Apr 30 00:44:33.719283 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: bind(21) AF_INET6 fe80::43b:efff:fec6:7219%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 00:44:33.719283 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: unable to create socket on eth0 (5) for fe80::43b:efff:fec6:7219%2#123 Apr 30 00:44:33.719283 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: failed to init interface for address fe80::43b:efff:fec6:7219%2 Apr 30 00:44:33.719283 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Apr 30 00:44:33.716859 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 00:44:33.717201 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 00:44:33.717266 ntpd[1987]: Listen normally on 3 eth0 172.31.25.63:123 Apr 30 00:44:33.717336 ntpd[1987]: Listen normally on 4 lo [::1]:123 Apr 30 00:44:33.717418 ntpd[1987]: bind(21) AF_INET6 fe80::43b:efff:fec6:7219%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 00:44:33.717457 ntpd[1987]: unable to create socket on eth0 (5) for fe80::43b:efff:fec6:7219%2#123 Apr 30 00:44:33.717484 ntpd[1987]: failed to init interface for address fe80::43b:efff:fec6:7219%2 Apr 30 00:44:33.717538 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Apr 30 00:44:33.724348 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 00:44:33.725298 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 00:44:33.725298 ntpd[1987]: 30 Apr 00:44:33 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 00:44:33.724415 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 00:44:33.739138 extend-filesystems[1985]: Found loop4 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found loop5 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found loop6 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found loop7 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found nvme0n1 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found nvme0n1p1 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found nvme0n1p2 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found nvme0n1p3 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found usr Apr 30 
00:44:33.739138 extend-filesystems[1985]: Found nvme0n1p4 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found nvme0n1p6 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found nvme0n1p7 Apr 30 00:44:33.739138 extend-filesystems[1985]: Found nvme0n1p9 Apr 30 00:44:33.739138 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Apr 30 00:44:33.768828 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 00:44:33.769220 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 00:44:33.802367 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 00:44:33.796604 dbus-daemon[1983]: [system] SELinux support is enabled Apr 30 00:44:33.822076 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 00:44:33.822154 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 00:44:33.826341 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 00:44:33.826381 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 00:44:33.829604 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1933 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 30 00:44:33.831028 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 00:44:33.831952 jq[1997]: true Apr 30 00:44:33.873718 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 30 00:44:33.881747 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Apr 30 00:44:33.890374 (ntainerd)[2017]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 00:44:33.895567 tar[2001]: linux-arm64/LICENSE Apr 30 00:44:33.895567 tar[2001]: linux-arm64/helm Apr 30 00:44:33.909288 extend-filesystems[2031]: resize2fs 1.47.1 (20-May-2024) Apr 30 00:44:33.915611 update_engine[1996]: I20250430 00:44:33.909441 1996 main.cc:92] Flatcar Update Engine starting Apr 30 00:44:33.923541 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Apr 30 00:44:33.929308 systemd[1]: Started update-engine.service - Update Engine. Apr 30 00:44:33.935707 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 30 00:44:33.942057 update_engine[1996]: I20250430 00:44:33.938555 1996 update_check_scheduler.cc:74] Next update check in 2m26s Apr 30 00:44:33.987775 coreos-metadata[1982]: Apr 30 00:44:33.986 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 00:44:33.992952 coreos-metadata[1982]: Apr 30 00:44:33.992 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 30 00:44:33.996169 coreos-metadata[1982]: Apr 30 00:44:33.995 INFO Fetch successful Apr 30 00:44:33.996169 coreos-metadata[1982]: Apr 30 00:44:33.995 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 30 00:44:33.999432 coreos-metadata[1982]: Apr 30 00:44:33.997 INFO Fetch successful Apr 30 00:44:33.999432 coreos-metadata[1982]: Apr 30 00:44:33.997 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 30 00:44:34.003586 coreos-metadata[1982]: Apr 30 00:44:34.003 INFO Fetch successful Apr 30 00:44:34.003586 coreos-metadata[1982]: Apr 30 00:44:34.003 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 30 00:44:34.004082 coreos-metadata[1982]: Apr 30 00:44:34.003 INFO Fetch successful Apr 30 00:44:34.004082 coreos-metadata[1982]: Apr 30 00:44:34.003 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 30 00:44:34.005981 coreos-metadata[1982]: Apr 30 00:44:34.005 INFO Fetch failed with 404: resource not found Apr 30 00:44:34.005981 coreos-metadata[1982]: Apr 30 00:44:34.005 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 30 00:44:34.010413 coreos-metadata[1982]: Apr 30 00:44:34.009 INFO Fetch successful Apr 30 00:44:34.010413 coreos-metadata[1982]: Apr 30 00:44:34.009 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 30 00:44:34.013709 coreos-metadata[1982]: Apr 30 00:44:34.013 INFO Fetch successful Apr 30 00:44:34.013709 coreos-metadata[1982]: Apr 30 00:44:34.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 30 00:44:34.014571 coreos-metadata[1982]: Apr 30 00:44:34.014 INFO Fetch successful Apr 30 00:44:34.014571 coreos-metadata[1982]: Apr 30 00:44:34.014 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 30 00:44:34.015998 coreos-metadata[1982]: Apr 30 00:44:34.015 INFO Fetch successful Apr 30 00:44:34.015998 coreos-metadata[1982]: Apr 30 00:44:34.015 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 30 00:44:34.021159 coreos-metadata[1982]: Apr 30 00:44:34.020 INFO Fetch successful Apr 30 00:44:34.030562 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 00:44:34.032253 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 00:44:34.074629 jq[2029]: true Apr 30 00:44:34.101040 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Apr 30 00:44:34.099037 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:44:34.129847 extend-filesystems[2031]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 30 00:44:34.129847 extend-filesystems[2031]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 00:44:34.129847 extend-filesystems[2031]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Apr 30 00:44:34.145454 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Apr 30 00:44:34.133144 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 00:44:34.136322 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 00:44:34.156349 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 30 00:44:34.249149 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1767) Apr 30 00:44:34.286734 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 00:44:34.292787 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 00:44:34.341662 systemd-logind[1995]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 00:44:34.341753 systemd-logind[1995]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 30 00:44:34.345272 systemd-logind[1995]: New seat seat0. Apr 30 00:44:34.349655 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 00:44:34.405072 bash[2076]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:44:34.412912 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:44:34.443972 containerd[2017]: time="2025-04-30T00:44:34.441567568Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 00:44:34.478346 systemd[1]: Starting sshkeys.service... Apr 30 00:44:34.523305 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 00:44:34.531760 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 00:44:34.591799 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 30 00:44:34.592119 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 30 00:44:34.603836 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2023 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 30 00:44:34.614447 systemd[1]: Starting polkit.service - Authorization Manager... Apr 30 00:44:34.644612 locksmithd[2032]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.649174733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.655923821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.655983017Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.656017793Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.656371301Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.656406245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.656524373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.656553761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.656838737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.656872685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662381 containerd[2017]: time="2025-04-30T00:44:34.656902889Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662894 containerd[2017]: time="2025-04-30T00:44:34.656927549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662894 containerd[2017]: time="2025-04-30T00:44:34.659181341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662894 containerd[2017]: time="2025-04-30T00:44:34.659668565Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662894 containerd[2017]: time="2025-04-30T00:44:34.661076321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:44:34.662894 containerd[2017]: time="2025-04-30T00:44:34.661135421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 00:44:34.662894 containerd[2017]: time="2025-04-30T00:44:34.661356413Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 00:44:34.662894 containerd[2017]: time="2025-04-30T00:44:34.661459217Z" level=info msg="metadata content store policy set" policy=shared Apr 30 00:44:34.673708 containerd[2017]: time="2025-04-30T00:44:34.672732281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 00:44:34.673708 containerd[2017]: time="2025-04-30T00:44:34.672833885Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 00:44:34.673708 containerd[2017]: time="2025-04-30T00:44:34.672868709Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 00:44:34.673708 containerd[2017]: time="2025-04-30T00:44:34.672997853Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Apr 30 00:44:34.673708 containerd[2017]: time="2025-04-30T00:44:34.673036409Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 00:44:34.673708 containerd[2017]: time="2025-04-30T00:44:34.673310621Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 00:44:34.678130 containerd[2017]: time="2025-04-30T00:44:34.675704297Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 00:44:34.678130 containerd[2017]: time="2025-04-30T00:44:34.675981329Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 00:44:34.678130 containerd[2017]: time="2025-04-30T00:44:34.676019261Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 00:44:34.678130 containerd[2017]: time="2025-04-30T00:44:34.676067873Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678427649Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678492737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678525053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678564305Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678598805Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678628181Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678657185Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678686213Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678729101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678761105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678792833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678823769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678853877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Apr 30 00:44:34.679140 containerd[2017]: time="2025-04-30T00:44:34.678885653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.679865 containerd[2017]: time="2025-04-30T00:44:34.678913829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.679865 containerd[2017]: time="2025-04-30T00:44:34.678946865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.679865 containerd[2017]: time="2025-04-30T00:44:34.678978665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.679865 containerd[2017]: time="2025-04-30T00:44:34.679032665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.679865 containerd[2017]: time="2025-04-30T00:44:34.679063673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.679094981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.680243585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.680284373Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.680333261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.680362097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.680388749Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.681320933Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.682538237Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.682589957Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.682627817Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.682652969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.682685261Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.682712177Z" level=info msg="NRI interface is disabled by configuration." 
Apr 30 00:44:34.683149 containerd[2017]: time="2025-04-30T00:44:34.682749125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 00:44:34.688486 containerd[2017]: time="2025-04-30T00:44:34.687427277Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:44:34.688486 containerd[2017]: time="2025-04-30T00:44:34.687558545Z" level=info msg="Connect containerd service" Apr 30 00:44:34.688486 containerd[2017]: time="2025-04-30T00:44:34.687621281Z" level=info msg="using legacy CRI server" Apr 30 00:44:34.688486 containerd[2017]: time="2025-04-30T00:44:34.687639857Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:44:34.688486 containerd[2017]: time="2025-04-30T00:44:34.687837977Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:44:34.690373 containerd[2017]: time="2025-04-30T00:44:34.690323537Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:44:34.690859 ntpd[1987]: bind(24) AF_INET6 fe80::43b:efff:fec6:7219%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 00:44:34.690923 ntpd[1987]: unable to create socket on eth0 (6) for fe80::43b:efff:fec6:7219%2#123 Apr 30 00:44:34.691359 ntpd[1987]: 30 Apr 00:44:34 ntpd[1987]: bind(24) AF_INET6 fe80::43b:efff:fec6:7219%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 00:44:34.691359 ntpd[1987]: 30 Apr 00:44:34 ntpd[1987]: unable to create socket on eth0 (6) for fe80::43b:efff:fec6:7219%2#123 Apr 30 00:44:34.691359 ntpd[1987]: 30 Apr 00:44:34 ntpd[1987]: failed to init interface for address fe80::43b:efff:fec6:7219%2 Apr 30 00:44:34.690952 ntpd[1987]: failed to init interface for address fe80::43b:efff:fec6:7219%2 Apr 30 00:44:34.694137 containerd[2017]: time="2025-04-30T00:44:34.692957333Z" level=info msg="Start subscribing containerd event" Apr 30 00:44:34.694137 containerd[2017]: time="2025-04-30T00:44:34.693059729Z" level=info msg="Start recovering state" Apr 30 00:44:34.694137 containerd[2017]: time="2025-04-30T00:44:34.693206081Z" level=info msg="Start event monitor" Apr 30 00:44:34.694137 containerd[2017]: time="2025-04-30T00:44:34.693233141Z" level=info msg="Start snapshots syncer" Apr 30 00:44:34.694137 containerd[2017]: time="2025-04-30T00:44:34.693254501Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:44:34.694137 containerd[2017]: time="2025-04-30T00:44:34.693275141Z" level=info msg="Start streaming server" Apr 30 00:44:34.700914 containerd[2017]: time="2025-04-30T00:44:34.698327465Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:44:34.700914 containerd[2017]: time="2025-04-30T00:44:34.698436701Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:44:34.700914 containerd[2017]: time="2025-04-30T00:44:34.698544677Z" level=info msg="containerd successfully booted in 0.259127s" Apr 30 00:44:34.698725 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:44:34.703733 polkitd[2127]: Started polkitd version 121 Apr 30 00:44:34.726420 polkitd[2127]: Loading rules from directory /etc/polkit-1/rules.d Apr 30 00:44:34.726553 polkitd[2127]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 30 00:44:34.730163 polkitd[2127]: Finished loading, compiling and executing 2 rules Apr 30 00:44:34.732556 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 30 00:44:34.733390 systemd[1]: Started polkit.service - Authorization Manager. Apr 30 00:44:34.738937 polkitd[2127]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 30 00:44:34.779523 systemd-resolved[1934]: System hostname changed to 'ip-172-31-25-63'. 
Apr 30 00:44:34.779532 systemd-hostnamed[2023]: Hostname set to (transient) Apr 30 00:44:34.856782 coreos-metadata[2116]: Apr 30 00:44:34.856 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 00:44:34.859819 coreos-metadata[2116]: Apr 30 00:44:34.858 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 30 00:44:34.860408 coreos-metadata[2116]: Apr 30 00:44:34.860 INFO Fetch successful Apr 30 00:44:34.860408 coreos-metadata[2116]: Apr 30 00:44:34.860 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 30 00:44:34.863980 coreos-metadata[2116]: Apr 30 00:44:34.861 INFO Fetch successful Apr 30 00:44:34.867245 unknown[2116]: wrote ssh authorized keys file for user: core Apr 30 00:44:34.956316 systemd-networkd[1933]: eth0: Gained IPv6LL Apr 30 00:44:34.957898 update-ssh-keys[2178]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:44:34.967194 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 00:44:34.981229 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:44:34.984877 systemd[1]: Finished sshkeys.service. Apr 30 00:44:35.002543 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:44:35.016518 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 30 00:44:35.021729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:35.037621 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:44:35.172246 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 00:44:35.210784 amazon-ssm-agent[2188]: Initializing new seelog logger Apr 30 00:44:35.215398 amazon-ssm-agent[2188]: New Seelog Logger Creation Complete Apr 30 00:44:35.215398 amazon-ssm-agent[2188]: 2025/04/30 00:44:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:44:35.215398 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:44:35.215398 amazon-ssm-agent[2188]: 2025/04/30 00:44:35 processing appconfig overrides Apr 30 00:44:35.219133 amazon-ssm-agent[2188]: 2025/04/30 00:44:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:44:35.221141 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:44:35.222148 amazon-ssm-agent[2188]: 2025/04/30 00:44:35 processing appconfig overrides Apr 30 00:44:35.222148 amazon-ssm-agent[2188]: 2025/04/30 00:44:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:44:35.222148 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:44:35.222148 amazon-ssm-agent[2188]: 2025/04/30 00:44:35 processing appconfig overrides Apr 30 00:44:35.226132 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO Proxy environment variables: Apr 30 00:44:35.232262 amazon-ssm-agent[2188]: 2025/04/30 00:44:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:44:35.232262 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 30 00:44:35.232481 amazon-ssm-agent[2188]: 2025/04/30 00:44:35 processing appconfig overrides Apr 30 00:44:35.333201 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO https_proxy: Apr 30 00:44:35.435495 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO http_proxy: Apr 30 00:44:35.534755 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO no_proxy: Apr 30 00:44:35.635090 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO Checking if agent identity type OnPrem can be assumed Apr 30 00:44:35.733617 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO Checking if agent identity type EC2 can be assumed Apr 30 00:44:35.833795 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO Agent will take identity from EC2 Apr 30 00:44:35.932387 tar[2001]: linux-arm64/README.md Apr 30 00:44:35.934729 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 00:44:35.974086 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:44:36.033313 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 00:44:36.133126 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 00:44:36.232294 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 30 00:44:36.333125 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 30 00:44:36.434223 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO [amazon-ssm-agent] Starting Core Agent Apr 30 00:44:36.534530 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 30 00:44:36.636121 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO [Registrar] Starting registrar module Apr 30 00:44:36.636121 amazon-ssm-agent[2188]: 2025-04-30 00:44:35 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 30 00:44:36.636121 amazon-ssm-agent[2188]: 2025-04-30 00:44:36 INFO [EC2Identity] EC2 registration was successful. Apr 30 00:44:36.636121 amazon-ssm-agent[2188]: 2025-04-30 00:44:36 INFO [CredentialRefresher] credentialRefresher has started Apr 30 00:44:36.636121 amazon-ssm-agent[2188]: 2025-04-30 00:44:36 INFO [CredentialRefresher] Starting credentials refresher loop Apr 30 00:44:36.636121 amazon-ssm-agent[2188]: 2025-04-30 00:44:36 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 30 00:44:36.636596 amazon-ssm-agent[2188]: 2025-04-30 00:44:36 INFO [CredentialRefresher] Next credential rotation will be in 31.56664779606667 minutes Apr 30 00:44:36.674041 sshd_keygen[2013]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:44:36.728379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:36.734183 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:44:36.745705 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:36.749444 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:44:36.754982 systemd[1]: Started sshd@0-172.31.25.63:22-147.75.109.163:58070.service - OpenSSH per-connection server daemon (147.75.109.163:58070). Apr 30 00:44:36.797783 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:44:36.798200 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Apr 30 00:44:36.821645 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:44:36.842975 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:44:36.855091 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:44:36.865826 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 00:44:36.868503 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:44:36.871052 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:44:36.873671 systemd[1]: Startup finished in 1.161s (kernel) + 14.972s (initrd) + 8.150s (userspace) = 24.284s. Apr 30 00:44:37.058784 sshd[2225]: Accepted publickey for core from 147.75.109.163 port 58070 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:37.063478 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:37.081319 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:44:37.087698 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:44:37.095632 systemd-logind[1995]: New session 1 of user core. Apr 30 00:44:37.123686 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:44:37.135792 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:44:37.154508 (systemd)[2244]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:44:37.388308 systemd[2244]: Queued start job for default target default.target. Apr 30 00:44:37.397959 systemd[2244]: Created slice app.slice - User Application Slice. Apr 30 00:44:37.398028 systemd[2244]: Reached target paths.target - Paths. Apr 30 00:44:37.398060 systemd[2244]: Reached target timers.target - Timers. Apr 30 00:44:37.400894 systemd[2244]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:44:37.442444 systemd[2244]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:44:37.442671 systemd[2244]: Reached target sockets.target - Sockets. Apr 30 00:44:37.442704 systemd[2244]: Reached target basic.target - Basic System. Apr 30 00:44:37.442785 systemd[2244]: Reached target default.target - Main User Target. Apr 30 00:44:37.442849 systemd[2244]: Startup finished in 274ms. Apr 30 00:44:37.443144 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:44:37.452399 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:44:37.678814 systemd[1]: Started sshd@1-172.31.25.63:22-147.75.109.163:43334.service - OpenSSH per-connection server daemon (147.75.109.163:43334). 
Apr 30 00:44:37.690877 ntpd[1987]: Listen normally on 7 eth0 [fe80::43b:efff:fec6:7219%2]:123 Apr 30 00:44:37.692845 ntpd[1987]: 30 Apr 00:44:37 ntpd[1987]: Listen normally on 7 eth0 [fe80::43b:efff:fec6:7219%2]:123 Apr 30 00:44:37.702047 amazon-ssm-agent[2188]: 2025-04-30 00:44:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 30 00:44:37.729741 kubelet[2222]: E0430 00:44:37.729680 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:37.733746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:37.734062 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:44:37.735066 systemd[1]: kubelet.service: Consumed 1.313s CPU time. Apr 30 00:44:37.806017 amazon-ssm-agent[2188]: 2025-04-30 00:44:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2259) started Apr 30 00:44:37.906566 amazon-ssm-agent[2188]: 2025-04-30 00:44:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 30 00:44:37.969528 sshd[2258]: Accepted publickey for core from 147.75.109.163 port 43334 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:37.972879 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:37.982417 systemd-logind[1995]: New session 2 of user core. Apr 30 00:44:37.990379 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 00:44:38.168726 sshd[2258]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:38.175873 systemd-logind[1995]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:44:38.177651 systemd[1]: sshd@1-172.31.25.63:22-147.75.109.163:43334.service: Deactivated successfully. Apr 30 00:44:38.181037 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:44:38.183173 systemd-logind[1995]: Removed session 2. Apr 30 00:44:38.219615 systemd[1]: Started sshd@2-172.31.25.63:22-147.75.109.163:43346.service - OpenSSH per-connection server daemon (147.75.109.163:43346). Apr 30 00:44:38.477635 sshd[2275]: Accepted publickey for core from 147.75.109.163 port 43346 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:38.480223 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:38.488767 systemd-logind[1995]: New session 3 of user core. Apr 30 00:44:38.498347 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:44:38.668890 sshd[2275]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:38.675389 systemd[1]: sshd@2-172.31.25.63:22-147.75.109.163:43346.service: Deactivated successfully. Apr 30 00:44:38.680681 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:44:38.683335 systemd-logind[1995]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:44:38.685199 systemd-logind[1995]: Removed session 3. Apr 30 00:44:38.726627 systemd[1]: Started sshd@3-172.31.25.63:22-147.75.109.163:43362.service - OpenSSH per-connection server daemon (147.75.109.163:43362). 
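The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory", status=1/FAILURE) is the normal pre-bootstrap state: the unit starts before anything has written the kubelet's config file, so systemd keeps restarting it until bootstrapping (typically kubeadm init or kubeadm join, which generate that file) has run. A sketch of how to confirm this from a shell, assuming kubeadm is the bootstrapper in use:

    $ systemctl status kubelet                 # shows the status=1/FAILURE exit seen in the log
    $ ls -l /var/lib/kubelet/config.yaml       # absent until kubeadm init/join has written it
    $ journalctl -u kubelet -b | tail          # same "failed to load kubelet config file" error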
Apr 30 00:44:38.999649 sshd[2282]: Accepted publickey for core from 147.75.109.163 port 43362 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:39.005017 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:39.016419 systemd-logind[1995]: New session 4 of user core. Apr 30 00:44:39.022410 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:44:39.200676 sshd[2282]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:39.208595 systemd-logind[1995]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:44:39.211428 systemd[1]: sshd@3-172.31.25.63:22-147.75.109.163:43362.service: Deactivated successfully. Apr 30 00:44:39.215145 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:44:39.218976 systemd-logind[1995]: Removed session 4. Apr 30 00:44:39.253575 systemd[1]: Started sshd@4-172.31.25.63:22-147.75.109.163:43378.service - OpenSSH per-connection server daemon (147.75.109.163:43378). Apr 30 00:44:39.523034 sshd[2289]: Accepted publickey for core from 147.75.109.163 port 43378 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:39.525753 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:39.533821 systemd-logind[1995]: New session 5 of user core. Apr 30 00:44:39.545457 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 00:44:39.697689 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:44:39.698986 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:39.715687 sudo[2292]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:39.753908 sshd[2289]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:39.760070 systemd[1]: sshd@4-172.31.25.63:22-147.75.109.163:43378.service: Deactivated successfully. Apr 30 00:44:39.763659 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:44:39.766630 systemd-logind[1995]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:44:39.768941 systemd-logind[1995]: Removed session 5. Apr 30 00:44:39.803458 systemd[1]: Started sshd@5-172.31.25.63:22-147.75.109.163:43392.service - OpenSSH per-connection server daemon (147.75.109.163:43392). Apr 30 00:44:40.073418 sshd[2297]: Accepted publickey for core from 147.75.109.163 port 43392 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:40.075821 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:40.085182 systemd-logind[1995]: New session 6 of user core. Apr 30 00:44:40.092378 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 00:44:40.236673 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:44:40.238426 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:40.246415 sudo[2301]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:40.257868 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 00:44:40.258893 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:40.286152 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Apr 30 00:44:40.290199 auditctl[2304]: No rules Apr 30 00:44:40.290910 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:44:40.291318 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 00:44:40.304933 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:44:40.352066 augenrules[2322]: No rules Apr 30 00:44:40.356241 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:44:40.359268 sudo[2300]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:40.397386 sshd[2297]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:40.403722 systemd-logind[1995]: Session 6 logged out. Waiting for processes to exit. Apr 30 00:44:40.405142 systemd[1]: sshd@5-172.31.25.63:22-147.75.109.163:43392.service: Deactivated successfully. Apr 30 00:44:40.407906 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:44:40.410724 systemd-logind[1995]: Removed session 6. Apr 30 00:44:40.454578 systemd[1]: Started sshd@6-172.31.25.63:22-147.75.109.163:43394.service - OpenSSH per-connection server daemon (147.75.109.163:43394). Apr 30 00:44:40.863026 systemd-resolved[1934]: Clock change detected. Flushing caches. Apr 30 00:44:40.878376 sshd[2330]: Accepted publickey for core from 147.75.109.163 port 43394 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:40.881398 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:40.889642 systemd-logind[1995]: New session 7 of user core. Apr 30 00:44:40.900382 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:44:41.037616 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:44:41.038287 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:41.470596 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:44:41.473169 (dockerd)[2348]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:44:41.827602 dockerd[2348]: time="2025-04-30T00:44:41.827225260Z" level=info msg="Starting up" Apr 30 00:44:41.929220 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1893083165-merged.mount: Deactivated successfully. Apr 30 00:44:42.015290 systemd[1]: var-lib-docker-metacopy\x2dcheck1840563339-merged.mount: Deactivated successfully. Apr 30 00:44:42.032533 dockerd[2348]: time="2025-04-30T00:44:42.032478889Z" level=info msg="Loading containers: start." Apr 30 00:44:42.195145 kernel: Initializing XFRM netlink socket Apr 30 00:44:42.228664 (udev-worker)[2370]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:44:42.325442 systemd-networkd[1933]: docker0: Link UP Apr 30 00:44:42.349619 dockerd[2348]: time="2025-04-30T00:44:42.349548507Z" level=info msg="Loading containers: done." 
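dockerd comes up here with the overlay2 storage driver over ext4 (hence the overlay-support and metacopy check mounts just before, and the XFRM/docker0 network setup). A quick sketch to confirm the daemon state once initialization completes (logged just below), assuming the docker CLI is on the host:

    $ docker info --format '{{.Driver}}'              # expect: overlay2, matching the log
    $ docker version --format '{{.Server.Version}}'   # expect: 26.1.0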
Apr 30 00:44:42.379797 dockerd[2348]: time="2025-04-30T00:44:42.379717431Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:44:42.380054 dockerd[2348]: time="2025-04-30T00:44:42.379879911Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 00:44:42.380140 dockerd[2348]: time="2025-04-30T00:44:42.380090427Z" level=info msg="Daemon has completed initialization" Apr 30 00:44:42.450773 dockerd[2348]: time="2025-04-30T00:44:42.450447723Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:44:42.450962 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 00:44:42.921059 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck792664470-merged.mount: Deactivated successfully. Apr 30 00:44:43.592157 containerd[2017]: time="2025-04-30T00:44:43.591900989Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 00:44:44.179948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850136723.mount: Deactivated successfully. Apr 30 00:44:46.338132 containerd[2017]: time="2025-04-30T00:44:46.336494047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:46.339815 containerd[2017]: time="2025-04-30T00:44:46.339756307Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118" Apr 30 00:44:46.342895 containerd[2017]: time="2025-04-30T00:44:46.342847759Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:46.350080 containerd[2017]: time="2025-04-30T00:44:46.350026927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:46.351661 containerd[2017]: time="2025-04-30T00:44:46.351600091Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.759635814s" Apr 30 00:44:46.351750 containerd[2017]: time="2025-04-30T00:44:46.351661759Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" Apr 30 00:44:46.352591 containerd[2017]: time="2025-04-30T00:44:46.352466719Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 00:44:47.735604 containerd[2017]: time="2025-04-30T00:44:47.735533998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:47.737810 containerd[2017]: time="2025-04-30T00:44:47.737741818Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571" Apr 30 00:44:47.738830 containerd[2017]: 
time="2025-04-30T00:44:47.738745462Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:47.744633 containerd[2017]: time="2025-04-30T00:44:47.744577522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:47.747132 containerd[2017]: time="2025-04-30T00:44:47.746915890Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.394390071s" Apr 30 00:44:47.747132 containerd[2017]: time="2025-04-30T00:44:47.746970934Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" Apr 30 00:44:47.748826 containerd[2017]: time="2025-04-30T00:44:47.748784878Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 00:44:48.156629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:44:48.165681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:48.533435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:48.544684 (kubelet)[2561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:48.629649 kubelet[2561]: E0430 00:44:48.629188 2561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:48.635391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:48.635827 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 00:44:49.087078 containerd[2017]: time="2025-04-30T00:44:49.084166964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:49.090276 containerd[2017]: time="2025-04-30T00:44:49.090221648Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173" Apr 30 00:44:49.092914 containerd[2017]: time="2025-04-30T00:44:49.092854112Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:49.103992 containerd[2017]: time="2025-04-30T00:44:49.103921688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:49.105658 containerd[2017]: time="2025-04-30T00:44:49.105589316Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.356551418s" Apr 30 00:44:49.105658 containerd[2017]: time="2025-04-30T00:44:49.105652544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" Apr 30 00:44:49.106714 containerd[2017]: time="2025-04-30T00:44:49.106663184Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 00:44:50.370444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92984256.mount: Deactivated successfully. 
Apr 30 00:44:50.888470 containerd[2017]: time="2025-04-30T00:44:50.888391405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:50.889829 containerd[2017]: time="2025-04-30T00:44:50.889774369Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351" Apr 30 00:44:50.892315 containerd[2017]: time="2025-04-30T00:44:50.892227697Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:50.897101 containerd[2017]: time="2025-04-30T00:44:50.897022129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:50.898551 containerd[2017]: time="2025-04-30T00:44:50.898338529Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.791617133s" Apr 30 00:44:50.898551 containerd[2017]: time="2025-04-30T00:44:50.898395157Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" Apr 30 00:44:50.899329 containerd[2017]: time="2025-04-30T00:44:50.899273161Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Apr 30 00:44:51.430315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010975754.mount: Deactivated successfully. 
Apr 30 00:44:52.581992 containerd[2017]: time="2025-04-30T00:44:52.581903102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:52.584361 containerd[2017]: time="2025-04-30T00:44:52.584295050Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Apr 30 00:44:52.585152 containerd[2017]: time="2025-04-30T00:44:52.584890646Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:52.592159 containerd[2017]: time="2025-04-30T00:44:52.591766550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:52.594470 containerd[2017]: time="2025-04-30T00:44:52.594259538Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.694924793s" Apr 30 00:44:52.594470 containerd[2017]: time="2025-04-30T00:44:52.594324098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Apr 30 00:44:52.595840 containerd[2017]: time="2025-04-30T00:44:52.595254878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 00:44:53.126035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054470337.mount: Deactivated successfully. 
Apr 30 00:44:53.134174 containerd[2017]: time="2025-04-30T00:44:53.133760412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:53.135476 containerd[2017]: time="2025-04-30T00:44:53.135410664Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 30 00:44:53.136310 containerd[2017]: time="2025-04-30T00:44:53.136218564Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:53.140534 containerd[2017]: time="2025-04-30T00:44:53.140439048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:53.142485 containerd[2017]: time="2025-04-30T00:44:53.142245312Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 546.928358ms" Apr 30 00:44:53.142485 containerd[2017]: time="2025-04-30T00:44:53.142303344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 30 00:44:53.143260 containerd[2017]: time="2025-04-30T00:44:53.142955676Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Apr 30 00:44:53.680572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651743063.mount: Deactivated successfully. Apr 30 00:44:55.922657 containerd[2017]: time="2025-04-30T00:44:55.922186398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:55.924166 containerd[2017]: time="2025-04-30T00:44:55.924064434Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Apr 30 00:44:55.925220 containerd[2017]: time="2025-04-30T00:44:55.925084314Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:55.933160 containerd[2017]: time="2025-04-30T00:44:55.932261646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:55.935839 containerd[2017]: time="2025-04-30T00:44:55.935071266Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.792059502s" Apr 30 00:44:55.935839 containerd[2017]: time="2025-04-30T00:44:55.935154486Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Apr 30 00:44:58.886044 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 30 00:44:58.894605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:59.239517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:59.244513 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:59.327345 kubelet[2713]: E0430 00:44:59.327280 2713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:59.332060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:59.333538 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:45:03.344946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:03.352641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:45:03.411743 systemd[1]: Reloading requested from client PID 2728 ('systemctl') (unit session-7.scope)... Apr 30 00:45:03.411777 systemd[1]: Reloading... Apr 30 00:45:03.645473 zram_generator::config[2772]: No configuration found. Apr 30 00:45:03.894269 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:45:04.069473 systemd[1]: Reloading finished in 656 ms. Apr 30 00:45:04.167097 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 00:45:04.167312 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 00:45:04.167774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:04.172835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:45:04.471017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:04.489680 (kubelet)[2832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:45:04.562424 kubelet[2832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:45:04.562424 kubelet[2832]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 00:45:04.562424 kubelet[2832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:45:04.564135 kubelet[2832]: I0430 00:45:04.562381 2832 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:45:04.982895 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
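The deprecation warnings above say --container-runtime-endpoint (and related flags) should move into the kubelet config file. A minimal sketch of the corresponding fragment, assuming the containerd socket from this log; this is an illustrative file, not the node's actual /var/lib/kubelet/config.yaml:

    $ cat /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock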
Apr 30 00:45:05.928826 kubelet[2832]: I0430 00:45:05.928751 2832 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 00:45:05.929483 kubelet[2832]: I0430 00:45:05.929459 2832 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:45:05.930069 kubelet[2832]: I0430 00:45:05.930044 2832 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 00:45:05.977037 kubelet[2832]: E0430 00:45:05.976971 2832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:45:05.984168 kubelet[2832]: I0430 00:45:05.984006 2832 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:45:05.996557 kubelet[2832]: E0430 00:45:05.996510 2832 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:45:05.997207 kubelet[2832]: I0430 00:45:05.996793 2832 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:45:06.001998 kubelet[2832]: I0430 00:45:06.001964 2832 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:45:06.003357 kubelet[2832]: I0430 00:45:06.002609 2832 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:45:06.003357 kubelet[2832]: I0430 00:45:06.002655 2832 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-63","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:45:06.003357 
kubelet[2832]: I0430 00:45:06.002973 2832 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:45:06.003357 kubelet[2832]: I0430 00:45:06.002993 2832 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 00:45:06.003730 kubelet[2832]: I0430 00:45:06.003231 2832 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:45:06.009960 kubelet[2832]: I0430 00:45:06.009786 2832 kubelet.go:446] "Attempting to sync node with API server" Apr 30 00:45:06.009960 kubelet[2832]: I0430 00:45:06.009831 2832 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:45:06.009960 kubelet[2832]: I0430 00:45:06.009869 2832 kubelet.go:352] "Adding apiserver pod source" Apr 30 00:45:06.009960 kubelet[2832]: I0430 00:45:06.009889 2832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:45:06.016148 kubelet[2832]: W0430 00:45:06.014312 2832 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-63&limit=500&resourceVersion=0": dial tcp 172.31.25.63:6443: connect: connection refused Apr 30 00:45:06.016148 kubelet[2832]: E0430 00:45:06.014403 2832 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-63&limit=500&resourceVersion=0\": dial tcp 172.31.25.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:45:06.016148 kubelet[2832]: W0430 00:45:06.015054 2832 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.63:6443: connect: connection refused Apr 30 00:45:06.016148 kubelet[2832]: E0430 00:45:06.015151 2832 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:45:06.016148 kubelet[2832]: I0430 00:45:06.015291 2832 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:45:06.016566 kubelet[2832]: I0430 00:45:06.016094 2832 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:45:06.016771 kubelet[2832]: W0430 00:45:06.016749 2832 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
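The long NodeConfig dump above is the kubelet's effective configuration; its HardEvictionThresholds block is the stock default set and maps one-to-one onto the evictionHard section of a KubeletConfiguration. Assuming the standard v1beta1 schema, the equivalent is:

    # evictionHard equivalent of the logged HardEvictionThresholds
    # (quantities and percentages copied from the dump above)
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"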
Apr 30 00:45:06.019805 kubelet[2832]: I0430 00:45:06.019768 2832 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 00:45:06.020038 kubelet[2832]: I0430 00:45:06.020020 2832 server.go:1287] "Started kubelet" Apr 30 00:45:06.025335 kubelet[2832]: I0430 00:45:06.025275 2832 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:45:06.027201 kubelet[2832]: I0430 00:45:06.027156 2832 server.go:490] "Adding debug handlers to kubelet server" Apr 30 00:45:06.031402 kubelet[2832]: I0430 00:45:06.031309 2832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:45:06.032024 kubelet[2832]: I0430 00:45:06.031993 2832 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:45:06.032680 kubelet[2832]: E0430 00:45:06.032466 2832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.63:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.63:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-63.183af204576c2a80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-63,UID:ip-172-31-25-63,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-63,},FirstTimestamp:2025-04-30 00:45:06.019986048 +0000 UTC m=+1.523331056,LastTimestamp:2025-04-30 00:45:06.019986048 +0000 UTC m=+1.523331056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-63,}" Apr 30 00:45:06.034030 kubelet[2832]: I0430 00:45:06.033979 2832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:45:06.034751 kubelet[2832]: I0430 00:45:06.034719 2832 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:45:06.040947 kubelet[2832]: E0430 00:45:06.040898 2832 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-63\" not found" Apr 30 00:45:06.041502 kubelet[2832]: I0430 00:45:06.041474 2832 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 00:45:06.042035 kubelet[2832]: I0430 00:45:06.042006 2832 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:45:06.042278 kubelet[2832]: I0430 00:45:06.042258 2832 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:45:06.044135 kubelet[2832]: W0430 00:45:06.044029 2832 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.63:6443: connect: connection refused Apr 30 00:45:06.045164 kubelet[2832]: E0430 00:45:06.044685 2832 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:45:06.045426 kubelet[2832]: I0430 00:45:06.045380 2832 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:45:06.045929 kubelet[2832]: E0430 00:45:06.045880 2832 
kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:45:06.046141 kubelet[2832]: I0430 00:45:06.046081 2832 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:45:06.049884 kubelet[2832]: E0430 00:45:06.049821 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-63?timeout=10s\": dial tcp 172.31.25.63:6443: connect: connection refused" interval="200ms" Apr 30 00:45:06.050828 kubelet[2832]: I0430 00:45:06.050792 2832 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:45:06.081330 kubelet[2832]: I0430 00:45:06.081014 2832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:45:06.091624 kubelet[2832]: I0430 00:45:06.091566 2832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:45:06.091771 kubelet[2832]: I0430 00:45:06.091615 2832 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 00:45:06.091771 kubelet[2832]: I0430 00:45:06.091681 2832 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 30 00:45:06.091771 kubelet[2832]: I0430 00:45:06.091698 2832 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 00:45:06.091931 kubelet[2832]: E0430 00:45:06.091797 2832 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:45:06.099665 kubelet[2832]: W0430 00:45:06.099595 2832 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.63:6443: connect: connection refused Apr 30 00:45:06.099862 kubelet[2832]: E0430 00:45:06.099675 2832 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:45:06.102240 kubelet[2832]: I0430 00:45:06.102204 2832 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 00:45:06.102748 kubelet[2832]: I0430 00:45:06.102406 2832 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 00:45:06.102748 kubelet[2832]: I0430 00:45:06.102442 2832 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:45:06.108468 kubelet[2832]: I0430 00:45:06.108433 2832 policy_none.go:49] "None policy: Start" Apr 30 00:45:06.108680 kubelet[2832]: I0430 00:45:06.108659 2832 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 00:45:06.108790 kubelet[2832]: I0430 00:45:06.108772 2832 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:45:06.121945 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 00:45:06.140225 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
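Every "connection refused" against https://172.31.25.63:6443 in this stretch is expected: the API server is itself one of the static pods this kubelet is about to launch, so each informer and the lease controller fail until it comes up, with the retry interval backing off (200ms here, then 400ms, 800ms, and 1.6s below). The failed crio factory registration is equally benign on a containerd node, since cAdvisor probes /var/run/crio/crio.sock unconditionally. One way to watch the bootstrap resolve from the node itself (address taken from the log) is simply:

    # Poll the local API server endpoint until the static pod answers.
    until curl -sk https://172.31.25.63:6443/healthz >/dev/null; do sleep 1; done
    echo "kube-apiserver is up"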
Apr 30 00:45:06.141858 kubelet[2832]: E0430 00:45:06.141777 2832 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-63\" not found" Apr 30 00:45:06.147016 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 00:45:06.158462 kubelet[2832]: I0430 00:45:06.157879 2832 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:45:06.158462 kubelet[2832]: I0430 00:45:06.158227 2832 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:45:06.158462 kubelet[2832]: I0430 00:45:06.158249 2832 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:45:06.158697 kubelet[2832]: I0430 00:45:06.158565 2832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:45:06.161635 kubelet[2832]: E0430 00:45:06.161597 2832 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 00:45:06.162049 kubelet[2832]: E0430 00:45:06.161853 2832 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-63\" not found" Apr 30 00:45:06.211474 systemd[1]: Created slice kubepods-burstable-pod8bc0faba0592823abf7bc23746ee31ae.slice - libcontainer container kubepods-burstable-pod8bc0faba0592823abf7bc23746ee31ae.slice. Apr 30 00:45:06.229885 kubelet[2832]: E0430 00:45:06.229464 2832 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-63\" not found" node="ip-172-31-25-63" Apr 30 00:45:06.237365 systemd[1]: Created slice kubepods-burstable-pod302c97664c7ece75cb98f3177ec7bf63.slice - libcontainer container kubepods-burstable-pod302c97664c7ece75cb98f3177ec7bf63.slice. 
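The "No need to create a mirror pod" errors above belong to the same bootstrap window: static pods are run straight from /etc/kubernetes/manifests and are only mirrored into the API for visibility, so the mirror can fail harmlessly until the node object exists. The kubepods-burstable-pod*.slice units correspond to the control-plane manifests. As a skeletal illustration only (the real installer-written manifest carries the full flag set and the hostPath mounts for ca-certs, k8s-certs, and usr-share-ca-certificates listed below; the image tag is assumed to match the logged kubelet version):

    # /etc/kubernetes/manifests/kube-apiserver.yaml -- skeletal sketch
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.32.0   # assumed tag
        command: ["kube-apiserver", "--advertise-address=172.31.25.63"]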
Apr 30 00:45:06.242407 kubelet[2832]: E0430 00:45:06.242007 2832 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-63\" not found" node="ip-172-31-25-63" Apr 30 00:45:06.244519 kubelet[2832]: I0430 00:45:06.244477 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/302c97664c7ece75cb98f3177ec7bf63-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-63\" (UID: \"302c97664c7ece75cb98f3177ec7bf63\") " pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:06.244738 kubelet[2832]: I0430 00:45:06.244713 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/302c97664c7ece75cb98f3177ec7bf63-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-63\" (UID: \"302c97664c7ece75cb98f3177ec7bf63\") " pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:06.245169 kubelet[2832]: I0430 00:45:06.244856 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/302c97664c7ece75cb98f3177ec7bf63-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-63\" (UID: \"302c97664c7ece75cb98f3177ec7bf63\") " pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:06.245169 kubelet[2832]: I0430 00:45:06.244901 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/302c97664c7ece75cb98f3177ec7bf63-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-63\" (UID: \"302c97664c7ece75cb98f3177ec7bf63\") " pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:06.245169 kubelet[2832]: I0430 00:45:06.244942 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/302c97664c7ece75cb98f3177ec7bf63-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-63\" (UID: \"302c97664c7ece75cb98f3177ec7bf63\") " pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:06.245169 kubelet[2832]: I0430 00:45:06.244983 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97fb36f7f65f4e7de00813dd03ddf43a-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-63\" (UID: \"97fb36f7f65f4e7de00813dd03ddf43a\") " pod="kube-system/kube-scheduler-ip-172-31-25-63" Apr 30 00:45:06.245169 kubelet[2832]: I0430 00:45:06.245018 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bc0faba0592823abf7bc23746ee31ae-ca-certs\") pod \"kube-apiserver-ip-172-31-25-63\" (UID: \"8bc0faba0592823abf7bc23746ee31ae\") " pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:06.245504 kubelet[2832]: I0430 00:45:06.245053 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bc0faba0592823abf7bc23746ee31ae-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-63\" (UID: \"8bc0faba0592823abf7bc23746ee31ae\") " pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:06.245504 kubelet[2832]: I0430 
00:45:06.245089 2832 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bc0faba0592823abf7bc23746ee31ae-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-63\" (UID: \"8bc0faba0592823abf7bc23746ee31ae\") " pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:06.246509 systemd[1]: Created slice kubepods-burstable-pod97fb36f7f65f4e7de00813dd03ddf43a.slice - libcontainer container kubepods-burstable-pod97fb36f7f65f4e7de00813dd03ddf43a.slice. Apr 30 00:45:06.251160 kubelet[2832]: E0430 00:45:06.251072 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-63?timeout=10s\": dial tcp 172.31.25.63:6443: connect: connection refused" interval="400ms" Apr 30 00:45:06.251602 kubelet[2832]: E0430 00:45:06.251566 2832 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-63\" not found" node="ip-172-31-25-63" Apr 30 00:45:06.261126 kubelet[2832]: I0430 00:45:06.261048 2832 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-63" Apr 30 00:45:06.261671 kubelet[2832]: E0430 00:45:06.261613 2832 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.63:6443/api/v1/nodes\": dial tcp 172.31.25.63:6443: connect: connection refused" node="ip-172-31-25-63" Apr 30 00:45:06.464325 kubelet[2832]: I0430 00:45:06.463708 2832 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-63" Apr 30 00:45:06.464325 kubelet[2832]: E0430 00:45:06.464238 2832 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.63:6443/api/v1/nodes\": dial tcp 172.31.25.63:6443: connect: connection refused" node="ip-172-31-25-63" Apr 30 00:45:06.532276 containerd[2017]: time="2025-04-30T00:45:06.531818523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-63,Uid:8bc0faba0592823abf7bc23746ee31ae,Namespace:kube-system,Attempt:0,}" Apr 30 00:45:06.544044 containerd[2017]: time="2025-04-30T00:45:06.543588699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-63,Uid:302c97664c7ece75cb98f3177ec7bf63,Namespace:kube-system,Attempt:0,}" Apr 30 00:45:06.553156 containerd[2017]: time="2025-04-30T00:45:06.553057983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-63,Uid:97fb36f7f65f4e7de00813dd03ddf43a,Namespace:kube-system,Attempt:0,}" Apr 30 00:45:06.651687 kubelet[2832]: E0430 00:45:06.651618 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-63?timeout=10s\": dial tcp 172.31.25.63:6443: connect: connection refused" interval="800ms" Apr 30 00:45:06.867559 kubelet[2832]: I0430 00:45:06.867486 2832 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-63" Apr 30 00:45:06.868302 kubelet[2832]: E0430 00:45:06.868207 2832 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.63:6443/api/v1/nodes\": dial tcp 172.31.25.63:6443: connect: connection refused" node="ip-172-31-25-63" Apr 30 00:45:06.950763 kubelet[2832]: W0430 00:45:06.950667 2832 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.63:6443: connect: connection refused Apr 30 00:45:06.951362 kubelet[2832]: E0430 00:45:06.950772 2832 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:45:07.060675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1300603038.mount: Deactivated successfully. Apr 30 00:45:07.078477 containerd[2017]: time="2025-04-30T00:45:07.078400910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:45:07.080545 containerd[2017]: time="2025-04-30T00:45:07.080479394Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:45:07.082466 containerd[2017]: time="2025-04-30T00:45:07.082369910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 30 00:45:07.084376 containerd[2017]: time="2025-04-30T00:45:07.084327878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:45:07.086459 containerd[2017]: time="2025-04-30T00:45:07.086406374Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:45:07.089446 containerd[2017]: time="2025-04-30T00:45:07.089148602Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:45:07.090838 containerd[2017]: time="2025-04-30T00:45:07.090741062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:45:07.095301 containerd[2017]: time="2025-04-30T00:45:07.095207030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:45:07.099878 containerd[2017]: time="2025-04-30T00:45:07.099203018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.457923ms" Apr 30 00:45:07.103493 containerd[2017]: time="2025-04-30T00:45:07.103434602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"268403\" in 571.487679ms" Apr 30 00:45:07.110678 containerd[2017]: time="2025-04-30T00:45:07.110390666Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.178927ms" Apr 30 00:45:07.227929 kubelet[2832]: W0430 00:45:07.227738 2832 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-63&limit=500&resourceVersion=0": dial tcp 172.31.25.63:6443: connect: connection refused Apr 30 00:45:07.228690 kubelet[2832]: E0430 00:45:07.228232 2832 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-63&limit=500&resourceVersion=0\": dial tcp 172.31.25.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:45:07.335292 containerd[2017]: time="2025-04-30T00:45:07.334905375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:07.335760 containerd[2017]: time="2025-04-30T00:45:07.335594775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:07.336322 containerd[2017]: time="2025-04-30T00:45:07.336242187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:07.337378 containerd[2017]: time="2025-04-30T00:45:07.336437595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:07.345496 containerd[2017]: time="2025-04-30T00:45:07.345304539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:07.345681 containerd[2017]: time="2025-04-30T00:45:07.345411627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:07.345681 containerd[2017]: time="2025-04-30T00:45:07.345492999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:07.345958 containerd[2017]: time="2025-04-30T00:45:07.345862863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:07.348149 containerd[2017]: time="2025-04-30T00:45:07.347938767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:07.350009 containerd[2017]: time="2025-04-30T00:45:07.348051339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:07.350009 containerd[2017]: time="2025-04-30T00:45:07.349930311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:07.350373 containerd[2017]: time="2025-04-30T00:45:07.350228343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:07.382529 systemd[1]: Started cri-containerd-5d9d9ff4683b6b80f859cbe3bbec31d21c415fff37d1153502b2540ffa44348a.scope - libcontainer container 5d9d9ff4683b6b80f859cbe3bbec31d21c415fff37d1153502b2540ffa44348a. Apr 30 00:45:07.415593 systemd[1]: Started cri-containerd-fe29529214778a74f359f8438b0f0c6529dfad457dc20e172d1136a62b3d2bd0.scope - libcontainer container fe29529214778a74f359f8438b0f0c6529dfad457dc20e172d1136a62b3d2bd0. Apr 30 00:45:07.426610 systemd[1]: Started cri-containerd-82766026e428b9653d879675543c08864ccca3dccc73c675e579408b16fcee88.scope - libcontainer container 82766026e428b9653d879675543c08864ccca3dccc73c675e579408b16fcee88. Apr 30 00:45:07.428517 kubelet[2832]: W0430 00:45:07.426884 2832 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.63:6443: connect: connection refused Apr 30 00:45:07.428517 kubelet[2832]: E0430 00:45:07.426943 2832 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:45:07.453243 kubelet[2832]: E0430 00:45:07.453165 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-63?timeout=10s\": dial tcp 172.31.25.63:6443: connect: connection refused" interval="1.6s" Apr 30 00:45:07.461418 kubelet[2832]: W0430 00:45:07.460807 2832 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.63:6443: connect: connection refused Apr 30 00:45:07.461418 kubelet[2832]: E0430 00:45:07.460903 2832 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.63:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:45:07.518419 containerd[2017]: time="2025-04-30T00:45:07.517551532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-63,Uid:8bc0faba0592823abf7bc23746ee31ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d9d9ff4683b6b80f859cbe3bbec31d21c415fff37d1153502b2540ffa44348a\"" Apr 30 00:45:07.530657 containerd[2017]: time="2025-04-30T00:45:07.530450344Z" level=info msg="CreateContainer within sandbox \"5d9d9ff4683b6b80f859cbe3bbec31d21c415fff37d1153502b2540ffa44348a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:45:07.548637 containerd[2017]: time="2025-04-30T00:45:07.548325052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-63,Uid:97fb36f7f65f4e7de00813dd03ddf43a,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"fe29529214778a74f359f8438b0f0c6529dfad457dc20e172d1136a62b3d2bd0\"" Apr 30 00:45:07.554347 containerd[2017]: time="2025-04-30T00:45:07.553681648Z" level=info msg="CreateContainer within sandbox \"fe29529214778a74f359f8438b0f0c6529dfad457dc20e172d1136a62b3d2bd0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:45:07.561922 containerd[2017]: time="2025-04-30T00:45:07.561857548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-63,Uid:302c97664c7ece75cb98f3177ec7bf63,Namespace:kube-system,Attempt:0,} returns sandbox id \"82766026e428b9653d879675543c08864ccca3dccc73c675e579408b16fcee88\"" Apr 30 00:45:07.568832 containerd[2017]: time="2025-04-30T00:45:07.568654876Z" level=info msg="CreateContainer within sandbox \"5d9d9ff4683b6b80f859cbe3bbec31d21c415fff37d1153502b2540ffa44348a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f8d96ade4b6fbd552fa2b82d1df5a0146838167501df5e438236bd109c82af8f\"" Apr 30 00:45:07.570220 containerd[2017]: time="2025-04-30T00:45:07.569951488Z" level=info msg="StartContainer for \"f8d96ade4b6fbd552fa2b82d1df5a0146838167501df5e438236bd109c82af8f\"" Apr 30 00:45:07.570220 containerd[2017]: time="2025-04-30T00:45:07.570048388Z" level=info msg="CreateContainer within sandbox \"82766026e428b9653d879675543c08864ccca3dccc73c675e579408b16fcee88\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:45:07.606652 containerd[2017]: time="2025-04-30T00:45:07.606328456Z" level=info msg="CreateContainer within sandbox \"fe29529214778a74f359f8438b0f0c6529dfad457dc20e172d1136a62b3d2bd0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0\"" Apr 30 00:45:07.607451 containerd[2017]: time="2025-04-30T00:45:07.607370308Z" level=info msg="StartContainer for \"a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0\"" Apr 30 00:45:07.614642 containerd[2017]: time="2025-04-30T00:45:07.612776512Z" level=info msg="CreateContainer within sandbox \"82766026e428b9653d879675543c08864ccca3dccc73c675e579408b16fcee88\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a\"" Apr 30 00:45:07.618450 containerd[2017]: time="2025-04-30T00:45:07.615306520Z" level=info msg="StartContainer for \"f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a\"" Apr 30 00:45:07.624436 systemd[1]: Started cri-containerd-f8d96ade4b6fbd552fa2b82d1df5a0146838167501df5e438236bd109c82af8f.scope - libcontainer container f8d96ade4b6fbd552fa2b82d1df5a0146838167501df5e438236bd109c82af8f. Apr 30 00:45:07.674779 kubelet[2832]: I0430 00:45:07.674346 2832 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-63" Apr 30 00:45:07.674938 kubelet[2832]: E0430 00:45:07.674885 2832 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.63:6443/api/v1/nodes\": dial tcp 172.31.25.63:6443: connect: connection refused" node="ip-172-31-25-63" Apr 30 00:45:07.688441 systemd[1]: Started cri-containerd-a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0.scope - libcontainer container a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0. 
Apr 30 00:45:07.708632 systemd[1]: Started cri-containerd-f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a.scope - libcontainer container f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a. Apr 30 00:45:07.765552 containerd[2017]: time="2025-04-30T00:45:07.763630205Z" level=info msg="StartContainer for \"f8d96ade4b6fbd552fa2b82d1df5a0146838167501df5e438236bd109c82af8f\" returns successfully" Apr 30 00:45:07.844710 containerd[2017]: time="2025-04-30T00:45:07.844349910Z" level=info msg="StartContainer for \"f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a\" returns successfully" Apr 30 00:45:07.864698 containerd[2017]: time="2025-04-30T00:45:07.864519630Z" level=info msg="StartContainer for \"a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0\" returns successfully" Apr 30 00:45:08.124696 kubelet[2832]: E0430 00:45:08.122074 2832 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-63\" not found" node="ip-172-31-25-63" Apr 30 00:45:08.136002 kubelet[2832]: E0430 00:45:08.135540 2832 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-63\" not found" node="ip-172-31-25-63" Apr 30 00:45:08.139844 kubelet[2832]: E0430 00:45:08.139810 2832 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-63\" not found" node="ip-172-31-25-63" Apr 30 00:45:09.149574 kubelet[2832]: E0430 00:45:09.149357 2832 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-63\" not found" node="ip-172-31-25-63" Apr 30 00:45:09.151579 kubelet[2832]: E0430 00:45:09.149762 2832 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-63\" not found" node="ip-172-31-25-63" Apr 30 00:45:09.277851 kubelet[2832]: I0430 00:45:09.277649 2832 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-63" Apr 30 00:45:10.723857 kubelet[2832]: E0430 00:45:10.723771 2832 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-63\" not found" node="ip-172-31-25-63" Apr 30 00:45:11.806210 kubelet[2832]: E0430 00:45:11.806053 2832 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-63\" not found" node="ip-172-31-25-63" Apr 30 00:45:11.959352 kubelet[2832]: I0430 00:45:11.958968 2832 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-25-63" Apr 30 00:45:12.018049 kubelet[2832]: I0430 00:45:12.018003 2832 apiserver.go:52] "Watching apiserver" Apr 30 00:45:12.042870 kubelet[2832]: I0430 00:45:12.042793 2832 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:45:12.046771 kubelet[2832]: I0430 00:45:12.046474 2832 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:12.094696 kubelet[2832]: E0430 00:45:12.094260 2832 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-63\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:12.094696 kubelet[2832]: I0430 00:45:12.094304 2832 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:12.101438 kubelet[2832]: E0430 00:45:12.101262 2832 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-63\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:12.101438 kubelet[2832]: I0430 00:45:12.101362 2832 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-63" Apr 30 00:45:12.108534 kubelet[2832]: E0430 00:45:12.108453 2832 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-63\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-63" Apr 30 00:45:13.276056 kubelet[2832]: I0430 00:45:13.275257 2832 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-63" Apr 30 00:45:13.964404 systemd[1]: Reloading requested from client PID 3113 ('systemctl') (unit session-7.scope)... Apr 30 00:45:13.964983 systemd[1]: Reloading... Apr 30 00:45:14.144155 zram_generator::config[3162]: No configuration found. Apr 30 00:45:14.372900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:45:14.578186 systemd[1]: Reloading finished in 612 ms. Apr 30 00:45:14.669926 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:45:14.685236 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:45:14.685958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:14.686029 systemd[1]: kubelet.service: Consumed 2.223s CPU time, 125.6M memory peak, 0B memory swap peak. Apr 30 00:45:14.694674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:45:15.017390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:45:15.030799 (kubelet)[3213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:45:15.138091 kubelet[3213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:45:15.141187 kubelet[3213]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 00:45:15.141187 kubelet[3213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:45:15.141187 kubelet[3213]: I0430 00:45:15.139414 3213 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:45:15.152391 kubelet[3213]: I0430 00:45:15.152037 3213 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 00:45:15.152391 kubelet[3213]: I0430 00:45:15.152158 3213 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:45:15.152881 kubelet[3213]: I0430 00:45:15.152682 3213 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 00:45:15.160151 kubelet[3213]: I0430 00:45:15.158915 3213 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:45:15.166915 kubelet[3213]: I0430 00:45:15.166873 3213 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:45:15.174509 kubelet[3213]: E0430 00:45:15.174460 3213 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:45:15.174728 kubelet[3213]: I0430 00:45:15.174705 3213 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:45:15.180096 kubelet[3213]: I0430 00:45:15.180048 3213 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:45:15.180858 kubelet[3213]: I0430 00:45:15.180805 3213 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:45:15.181285 kubelet[3213]: I0430 00:45:15.180964 3213 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-63","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:45:15.181598 kubelet[3213]: I0430 00:45:15.181576 3213 topology_manager.go:138] 
"Creating topology manager with none policy" Apr 30 00:45:15.181697 kubelet[3213]: I0430 00:45:15.181680 3213 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 00:45:15.181921 kubelet[3213]: I0430 00:45:15.181900 3213 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:45:15.182434 kubelet[3213]: I0430 00:45:15.182406 3213 kubelet.go:446] "Attempting to sync node with API server" Apr 30 00:45:15.182575 kubelet[3213]: I0430 00:45:15.182556 3213 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:45:15.182708 kubelet[3213]: I0430 00:45:15.182687 3213 kubelet.go:352] "Adding apiserver pod source" Apr 30 00:45:15.182809 kubelet[3213]: I0430 00:45:15.182791 3213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:45:15.189136 kubelet[3213]: I0430 00:45:15.187692 3213 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:45:15.194006 kubelet[3213]: I0430 00:45:15.190426 3213 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:45:15.194006 kubelet[3213]: I0430 00:45:15.193770 3213 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 00:45:15.194006 kubelet[3213]: I0430 00:45:15.193836 3213 server.go:1287] "Started kubelet" Apr 30 00:45:15.202101 kubelet[3213]: I0430 00:45:15.202041 3213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:45:15.211495 kubelet[3213]: I0430 00:45:15.211420 3213 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:45:15.218324 kubelet[3213]: I0430 00:45:15.218184 3213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:45:15.226137 kubelet[3213]: I0430 00:45:15.221483 3213 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:45:15.229133 kubelet[3213]: I0430 00:45:15.227203 3213 server.go:490] "Adding debug handlers to kubelet server" Apr 30 00:45:15.245279 kubelet[3213]: I0430 00:45:15.221868 3213 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:45:15.245490 kubelet[3213]: E0430 00:45:15.222024 3213 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-63\" not found" Apr 30 00:45:15.246589 kubelet[3213]: I0430 00:45:15.221851 3213 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 00:45:15.246785 kubelet[3213]: I0430 00:45:15.231036 3213 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:45:15.248141 kubelet[3213]: I0430 00:45:15.247613 3213 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:45:15.283262 kubelet[3213]: I0430 00:45:15.282523 3213 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:45:15.284130 kubelet[3213]: I0430 00:45:15.283611 3213 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:45:15.299192 kubelet[3213]: E0430 00:45:15.299080 3213 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:45:15.319795 kubelet[3213]: I0430 00:45:15.319435 3213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:45:15.329423 kubelet[3213]: I0430 00:45:15.329331 3213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:45:15.331809 kubelet[3213]: I0430 00:45:15.330194 3213 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 00:45:15.331809 kubelet[3213]: I0430 00:45:15.330246 3213 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 30 00:45:15.331809 kubelet[3213]: I0430 00:45:15.330262 3213 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 00:45:15.331809 kubelet[3213]: E0430 00:45:15.330333 3213 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:45:15.332228 kubelet[3213]: I0430 00:45:15.332101 3213 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:45:15.430456 kubelet[3213]: E0430 00:45:15.430394 3213 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 00:45:15.431589 kubelet[3213]: I0430 00:45:15.431352 3213 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 00:45:15.431589 kubelet[3213]: I0430 00:45:15.431379 3213 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 00:45:15.431589 kubelet[3213]: I0430 00:45:15.431412 3213 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:45:15.432588 kubelet[3213]: I0430 00:45:15.432260 3213 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:45:15.432588 kubelet[3213]: I0430 00:45:15.432291 3213 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:45:15.432588 kubelet[3213]: I0430 00:45:15.432325 3213 policy_none.go:49] "None policy: Start" Apr 30 00:45:15.432588 kubelet[3213]: I0430 00:45:15.432344 3213 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 00:45:15.432588 kubelet[3213]: I0430 00:45:15.432367 3213 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:45:15.434996 kubelet[3213]: I0430 00:45:15.434281 3213 state_mem.go:75] "Updated machine memory state" Apr 30 00:45:15.457947 kubelet[3213]: I0430 00:45:15.457911 3213 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:45:15.459157 kubelet[3213]: I0430 00:45:15.459096 3213 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:45:15.459372 kubelet[3213]: I0430 00:45:15.459320 3213 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:45:15.461771 kubelet[3213]: I0430 00:45:15.460916 3213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:45:15.465584 kubelet[3213]: E0430 00:45:15.463993 3213 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 30 00:45:15.580486 kubelet[3213]: I0430 00:45:15.580274 3213 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-63" Apr 30 00:45:15.598308 kubelet[3213]: I0430 00:45:15.598069 3213 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-25-63" Apr 30 00:45:15.598308 kubelet[3213]: I0430 00:45:15.598234 3213 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-25-63" Apr 30 00:45:15.634054 kubelet[3213]: I0430 00:45:15.632054 3213 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:15.634054 kubelet[3213]: I0430 00:45:15.632615 3213 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-63" Apr 30 00:45:15.634054 kubelet[3213]: I0430 00:45:15.633224 3213 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:15.648825 kubelet[3213]: E0430 00:45:15.648275 3213 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-63\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-63" Apr 30 00:45:15.655678 kubelet[3213]: I0430 00:45:15.655624 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bc0faba0592823abf7bc23746ee31ae-ca-certs\") pod \"kube-apiserver-ip-172-31-25-63\" (UID: \"8bc0faba0592823abf7bc23746ee31ae\") " pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:15.656943 kubelet[3213]: I0430 00:45:15.656664 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/302c97664c7ece75cb98f3177ec7bf63-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-63\" (UID: \"302c97664c7ece75cb98f3177ec7bf63\") " pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:15.656943 kubelet[3213]: I0430 00:45:15.656800 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/302c97664c7ece75cb98f3177ec7bf63-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-63\" (UID: \"302c97664c7ece75cb98f3177ec7bf63\") " pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:15.656943 kubelet[3213]: I0430 00:45:15.656882 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/302c97664c7ece75cb98f3177ec7bf63-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-63\" (UID: \"302c97664c7ece75cb98f3177ec7bf63\") " pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:15.659054 kubelet[3213]: I0430 00:45:15.658746 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/302c97664c7ece75cb98f3177ec7bf63-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-63\" (UID: \"302c97664c7ece75cb98f3177ec7bf63\") " pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:15.659054 kubelet[3213]: I0430 00:45:15.658828 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/97fb36f7f65f4e7de00813dd03ddf43a-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-63\" (UID: \"97fb36f7f65f4e7de00813dd03ddf43a\") " pod="kube-system/kube-scheduler-ip-172-31-25-63" Apr 30 00:45:15.659054 kubelet[3213]: I0430 00:45:15.658868 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bc0faba0592823abf7bc23746ee31ae-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-63\" (UID: \"8bc0faba0592823abf7bc23746ee31ae\") " pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:15.659054 kubelet[3213]: I0430 00:45:15.658910 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bc0faba0592823abf7bc23746ee31ae-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-63\" (UID: \"8bc0faba0592823abf7bc23746ee31ae\") " pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:15.659054 kubelet[3213]: I0430 00:45:15.658953 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/302c97664c7ece75cb98f3177ec7bf63-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-63\" (UID: \"302c97664c7ece75cb98f3177ec7bf63\") " pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:16.200262 kubelet[3213]: I0430 00:45:16.200189 3213 apiserver.go:52] "Watching apiserver" Apr 30 00:45:16.245944 kubelet[3213]: I0430 00:45:16.245614 3213 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:45:16.377677 kubelet[3213]: I0430 00:45:16.377616 3213 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:16.379832 kubelet[3213]: I0430 00:45:16.378486 3213 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-63" Apr 30 00:45:16.379832 kubelet[3213]: I0430 00:45:16.378539 3213 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:16.435773 kubelet[3213]: E0430 00:45:16.435457 3213 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-63\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-63" Apr 30 00:45:16.458039 kubelet[3213]: E0430 00:45:16.457502 3213 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-63\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-63" Apr 30 00:45:16.458667 kubelet[3213]: E0430 00:45:16.458626 3213 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-63\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-25-63" Apr 30 00:45:16.569039 kubelet[3213]: I0430 00:45:16.568942 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-63" podStartSLOduration=1.568919401 podStartE2EDuration="1.568919401s" podCreationTimestamp="2025-04-30 00:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:16.527351545 +0000 UTC m=+1.486889073" watchObservedRunningTime="2025-04-30 00:45:16.568919401 +0000 UTC m=+1.528456917" Apr 30 00:45:16.627044 kubelet[3213]: I0430 00:45:16.626952 3213 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-63" podStartSLOduration=3.626928445 podStartE2EDuration="3.626928445s" podCreationTimestamp="2025-04-30 00:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:16.582810109 +0000 UTC m=+1.542347637" watchObservedRunningTime="2025-04-30 00:45:16.626928445 +0000 UTC m=+1.586465949" Apr 30 00:45:16.684050 kubelet[3213]: I0430 00:45:16.683955 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-63" podStartSLOduration=1.6839343850000001 podStartE2EDuration="1.683934385s" podCreationTimestamp="2025-04-30 00:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:16.627786325 +0000 UTC m=+1.587323841" watchObservedRunningTime="2025-04-30 00:45:16.683934385 +0000 UTC m=+1.643471889" Apr 30 00:45:19.766055 update_engine[1996]: I20250430 00:45:19.765959 1996 update_attempter.cc:509] Updating boot flags... Apr 30 00:45:19.924420 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3292) Apr 30 00:45:20.361161 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3295) Apr 30 00:45:20.571770 sudo[2333]: pam_unix(sudo:session): session closed for user root Apr 30 00:45:20.613398 sshd[2330]: pam_unix(sshd:session): session closed for user core Apr 30 00:45:20.626389 systemd[1]: sshd@6-172.31.25.63:22-147.75.109.163:43394.service: Deactivated successfully. Apr 30 00:45:20.630095 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:45:20.633970 systemd[1]: session-7.scope: Consumed 10.632s CPU time, 149.0M memory peak, 0B memory swap peak. Apr 30 00:45:20.653313 systemd-logind[1995]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:45:20.684067 systemd-logind[1995]: Removed session 7. Apr 30 00:45:20.863183 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3295) Apr 30 00:45:20.946844 kubelet[3213]: I0430 00:45:20.944628 3213 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:45:20.946844 kubelet[3213]: I0430 00:45:20.946503 3213 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:45:20.947519 containerd[2017]: time="2025-04-30T00:45:20.945417787Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:45:21.833810 systemd[1]: Created slice kubepods-besteffort-pod90bf0b36_2590_4e9e_8e28_8c9935183215.slice - libcontainer container kubepods-besteffort-pod90bf0b36_2590_4e9e_8e28_8c9935183215.slice. 
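By this point the node is registered, the mirror-pod errors have flipped to "already exists", and the controller-manager has handed the node its podCIDR (192.168.0.0/24 above), which the kubelet pushes to containerd through CRI for CNI setup. With a working kubeconfig, the assignment can be confirmed with a one-liner such as:

    # Read back the CIDR the kubelet just pushed to the runtime
    kubectl get node ip-172-31-25-63 -o jsonpath='{.spec.podCIDR}'
    # expected output: 192.168.0.0/24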
Apr 30 00:45:21.910163 kubelet[3213]: I0430 00:45:21.909730 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/90bf0b36-2590-4e9e-8e28-8c9935183215-kube-proxy\") pod \"kube-proxy-x5v8q\" (UID: \"90bf0b36-2590-4e9e-8e28-8c9935183215\") " pod="kube-system/kube-proxy-x5v8q" Apr 30 00:45:21.910163 kubelet[3213]: I0430 00:45:21.909809 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcrd6\" (UniqueName: \"kubernetes.io/projected/90bf0b36-2590-4e9e-8e28-8c9935183215-kube-api-access-zcrd6\") pod \"kube-proxy-x5v8q\" (UID: \"90bf0b36-2590-4e9e-8e28-8c9935183215\") " pod="kube-system/kube-proxy-x5v8q" Apr 30 00:45:21.910163 kubelet[3213]: I0430 00:45:21.909858 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90bf0b36-2590-4e9e-8e28-8c9935183215-xtables-lock\") pod \"kube-proxy-x5v8q\" (UID: \"90bf0b36-2590-4e9e-8e28-8c9935183215\") " pod="kube-system/kube-proxy-x5v8q" Apr 30 00:45:21.910163 kubelet[3213]: I0430 00:45:21.909898 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90bf0b36-2590-4e9e-8e28-8c9935183215-lib-modules\") pod \"kube-proxy-x5v8q\" (UID: \"90bf0b36-2590-4e9e-8e28-8c9935183215\") " pod="kube-system/kube-proxy-x5v8q" Apr 30 00:45:21.992322 systemd[1]: Created slice kubepods-besteffort-pod7bcd407c_419c_44c6_b738_4bb00ff30734.slice - libcontainer container kubepods-besteffort-pod7bcd407c_419c_44c6_b738_4bb00ff30734.slice. Apr 30 00:45:22.012170 kubelet[3213]: I0430 00:45:22.012078 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g7cx\" (UniqueName: \"kubernetes.io/projected/7bcd407c-419c-44c6-b738-4bb00ff30734-kube-api-access-5g7cx\") pod \"tigera-operator-789496d6f5-9kpvc\" (UID: \"7bcd407c-419c-44c6-b738-4bb00ff30734\") " pod="tigera-operator/tigera-operator-789496d6f5-9kpvc" Apr 30 00:45:22.015198 kubelet[3213]: I0430 00:45:22.014433 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bcd407c-419c-44c6-b738-4bb00ff30734-var-lib-calico\") pod \"tigera-operator-789496d6f5-9kpvc\" (UID: \"7bcd407c-419c-44c6-b738-4bb00ff30734\") " pod="tigera-operator/tigera-operator-789496d6f5-9kpvc" Apr 30 00:45:22.149492 containerd[2017]: time="2025-04-30T00:45:22.148851857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x5v8q,Uid:90bf0b36-2590-4e9e-8e28-8c9935183215,Namespace:kube-system,Attempt:0,}" Apr 30 00:45:22.200002 containerd[2017]: time="2025-04-30T00:45:22.199476929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:22.200002 containerd[2017]: time="2025-04-30T00:45:22.199562249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:22.200002 containerd[2017]: time="2025-04-30T00:45:22.199587389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:22.200002 containerd[2017]: time="2025-04-30T00:45:22.199737209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:22.244468 systemd[1]: Started cri-containerd-8629160c32d9f36dae6501c8aad4f66a93b3b7f4c2df50747d579ffcdd4e3a04.scope - libcontainer container 8629160c32d9f36dae6501c8aad4f66a93b3b7f4c2df50747d579ffcdd4e3a04. Apr 30 00:45:22.293060 containerd[2017]: time="2025-04-30T00:45:22.292988537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x5v8q,Uid:90bf0b36-2590-4e9e-8e28-8c9935183215,Namespace:kube-system,Attempt:0,} returns sandbox id \"8629160c32d9f36dae6501c8aad4f66a93b3b7f4c2df50747d579ffcdd4e3a04\"" Apr 30 00:45:22.301836 containerd[2017]: time="2025-04-30T00:45:22.301602029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-9kpvc,Uid:7bcd407c-419c-44c6-b738-4bb00ff30734,Namespace:tigera-operator,Attempt:0,}" Apr 30 00:45:22.304686 containerd[2017]: time="2025-04-30T00:45:22.304345457Z" level=info msg="CreateContainer within sandbox \"8629160c32d9f36dae6501c8aad4f66a93b3b7f4c2df50747d579ffcdd4e3a04\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:45:22.360614 containerd[2017]: time="2025-04-30T00:45:22.360308970Z" level=info msg="CreateContainer within sandbox \"8629160c32d9f36dae6501c8aad4f66a93b3b7f4c2df50747d579ffcdd4e3a04\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1bf29eccd73f3ac86262da90e8c36303f9be9db9107bcf92b2b1b00fade64715\"" Apr 30 00:45:22.362916 containerd[2017]: time="2025-04-30T00:45:22.362716626Z" level=info msg="StartContainer for \"1bf29eccd73f3ac86262da90e8c36303f9be9db9107bcf92b2b1b00fade64715\"" Apr 30 00:45:22.366510 containerd[2017]: time="2025-04-30T00:45:22.364841778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:22.366510 containerd[2017]: time="2025-04-30T00:45:22.364926606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:22.366510 containerd[2017]: time="2025-04-30T00:45:22.364951650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:22.366510 containerd[2017]: time="2025-04-30T00:45:22.365324226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:22.438493 systemd[1]: Started cri-containerd-91157bb16cf9f4419ccd3d7ac1454a06f7ef9e32bd56570b3ef80075b08111e7.scope - libcontainer container 91157bb16cf9f4419ccd3d7ac1454a06f7ef9e32bd56570b3ef80075b08111e7. Apr 30 00:45:22.472497 systemd[1]: Started cri-containerd-1bf29eccd73f3ac86262da90e8c36303f9be9db9107bcf92b2b1b00fade64715.scope - libcontainer container 1bf29eccd73f3ac86262da90e8c36303f9be9db9107bcf92b2b1b00fade64715. 
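[Editor's note] The containerd entries interleave sandbox and container lifecycle results with per-shim plugin-load chatter, which makes the actual sequence (RunPodSandbox, then CreateContainer, then StartContainer) hard to follow by eye. A small sketch that filters a journal dump like this one down to those IDs; the regexes target only the msg="..." shapes visible in these lines, which is an assumption about this journald formatting, not a containerd interface:

    import re
    import sys

    # msg="RunPodSandbox for &PodSandboxMetadata{Name:<pod>,...} returns sandbox id \"<hex>\""
    SANDBOX = re.compile(r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),'
                         r'.*returns sandbox id \\"([0-9a-f]+)\\"')
    # msg="StartContainer for \"<hex>\" returns successfully"
    STARTED = re.compile(r'StartContainer for \\"([0-9a-f]+)\\" returns successfully')

    for line in sys.stdin:
        if m := SANDBOX.search(line):
            print(f"sandbox {m.group(2)[:12]} pod={m.group(1)}")
        elif m := STARTED.search(line):
            print(f"started {m.group(1)[:12]}")

Fed the lines above, this reduces the kube-proxy-x5v8q start to two lines: its sandbox id 8629160c... and the started container 1bf29ecc....
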
Apr 30 00:45:22.548827 containerd[2017]: time="2025-04-30T00:45:22.548754319Z" level=info msg="StartContainer for \"1bf29eccd73f3ac86262da90e8c36303f9be9db9107bcf92b2b1b00fade64715\" returns successfully" Apr 30 00:45:22.586614 containerd[2017]: time="2025-04-30T00:45:22.586480795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-9kpvc,Uid:7bcd407c-419c-44c6-b738-4bb00ff30734,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"91157bb16cf9f4419ccd3d7ac1454a06f7ef9e32bd56570b3ef80075b08111e7\"" Apr 30 00:45:22.594333 containerd[2017]: time="2025-04-30T00:45:22.594260227Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 00:45:23.435264 kubelet[3213]: I0430 00:45:23.434727 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x5v8q" podStartSLOduration=2.434704315 podStartE2EDuration="2.434704315s" podCreationTimestamp="2025-04-30 00:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:23.434394727 +0000 UTC m=+8.393932267" watchObservedRunningTime="2025-04-30 00:45:23.434704315 +0000 UTC m=+8.394241831" Apr 30 00:45:26.097796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183504738.mount: Deactivated successfully. Apr 30 00:45:26.787653 containerd[2017]: time="2025-04-30T00:45:26.787587300Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:26.789298 containerd[2017]: time="2025-04-30T00:45:26.789245052Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" Apr 30 00:45:26.790346 containerd[2017]: time="2025-04-30T00:45:26.790291476Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:26.794596 containerd[2017]: time="2025-04-30T00:45:26.794514480Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:26.796551 containerd[2017]: time="2025-04-30T00:45:26.796355952Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 4.202033817s" Apr 30 00:45:26.796551 containerd[2017]: time="2025-04-30T00:45:26.796409340Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" Apr 30 00:45:26.803921 containerd[2017]: time="2025-04-30T00:45:26.803858472Z" level=info msg="CreateContainer within sandbox \"91157bb16cf9f4419ccd3d7ac1454a06f7ef9e32bd56570b3ef80075b08111e7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 00:45:26.822655 containerd[2017]: time="2025-04-30T00:45:26.822467940Z" level=info msg="CreateContainer within sandbox \"91157bb16cf9f4419ccd3d7ac1454a06f7ef9e32bd56570b3ef80075b08111e7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f\"" Apr 30 00:45:26.824932 containerd[2017]: time="2025-04-30T00:45:26.823290084Z" level=info msg="StartContainer for \"1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f\"" Apr 30 00:45:26.884442 systemd[1]: Started cri-containerd-1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f.scope - libcontainer container 1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f. Apr 30 00:45:26.929275 containerd[2017]: time="2025-04-30T00:45:26.929198424Z" level=info msg="StartContainer for \"1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f\" returns successfully" Apr 30 00:45:32.993150 kubelet[3213]: I0430 00:45:32.991947 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-9kpvc" podStartSLOduration=7.783751849 podStartE2EDuration="11.99192435s" podCreationTimestamp="2025-04-30 00:45:21 +0000 UTC" firstStartedPulling="2025-04-30 00:45:22.590099551 +0000 UTC m=+7.549637043" lastFinishedPulling="2025-04-30 00:45:26.79827204 +0000 UTC m=+11.757809544" observedRunningTime="2025-04-30 00:45:27.448391099 +0000 UTC m=+12.407928639" watchObservedRunningTime="2025-04-30 00:45:32.99192435 +0000 UTC m=+17.951461854" Apr 30 00:45:33.011174 systemd[1]: Created slice kubepods-besteffort-pod3e1c4424_f5a8_4b2e_861e_9722c7faec18.slice - libcontainer container kubepods-besteffort-pod3e1c4424_f5a8_4b2e_861e_9722c7faec18.slice. Apr 30 00:45:33.084709 kubelet[3213]: I0430 00:45:33.084484 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3e1c4424-f5a8-4b2e-861e-9722c7faec18-typha-certs\") pod \"calico-typha-75f8c8b4b5-hwz72\" (UID: \"3e1c4424-f5a8-4b2e-861e-9722c7faec18\") " pod="calico-system/calico-typha-75f8c8b4b5-hwz72" Apr 30 00:45:33.084709 kubelet[3213]: I0430 00:45:33.084564 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e1c4424-f5a8-4b2e-861e-9722c7faec18-tigera-ca-bundle\") pod \"calico-typha-75f8c8b4b5-hwz72\" (UID: \"3e1c4424-f5a8-4b2e-861e-9722c7faec18\") " pod="calico-system/calico-typha-75f8c8b4b5-hwz72" Apr 30 00:45:33.084709 kubelet[3213]: I0430 00:45:33.084604 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vlxg\" (UniqueName: \"kubernetes.io/projected/3e1c4424-f5a8-4b2e-861e-9722c7faec18-kube-api-access-6vlxg\") pod \"calico-typha-75f8c8b4b5-hwz72\" (UID: \"3e1c4424-f5a8-4b2e-861e-9722c7faec18\") " pod="calico-system/calico-typha-75f8c8b4b5-hwz72" Apr 30 00:45:33.260233 systemd[1]: Created slice kubepods-besteffort-pod87e01877_d36d_44f2_b180_9cf12c32320f.slice - libcontainer container kubepods-besteffort-pod87e01877_d36d_44f2_b180_9cf12c32320f.slice. 
Apr 30 00:45:33.285373 kubelet[3213]: I0430 00:45:33.285288 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/87e01877-d36d-44f2-b180-9cf12c32320f-cni-bin-dir\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285373 kubelet[3213]: I0430 00:45:33.285374 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/87e01877-d36d-44f2-b180-9cf12c32320f-var-lib-calico\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285668 kubelet[3213]: I0430 00:45:33.285418 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/87e01877-d36d-44f2-b180-9cf12c32320f-var-run-calico\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285668 kubelet[3213]: I0430 00:45:33.285467 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2lx\" (UniqueName: \"kubernetes.io/projected/87e01877-d36d-44f2-b180-9cf12c32320f-kube-api-access-lx2lx\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285668 kubelet[3213]: I0430 00:45:33.285520 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e01877-d36d-44f2-b180-9cf12c32320f-lib-modules\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285668 kubelet[3213]: I0430 00:45:33.285560 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87e01877-d36d-44f2-b180-9cf12c32320f-xtables-lock\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285668 kubelet[3213]: I0430 00:45:33.285596 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/87e01877-d36d-44f2-b180-9cf12c32320f-cni-log-dir\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285952 kubelet[3213]: I0430 00:45:33.285636 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e01877-d36d-44f2-b180-9cf12c32320f-tigera-ca-bundle\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285952 kubelet[3213]: I0430 00:45:33.285675 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/87e01877-d36d-44f2-b180-9cf12c32320f-node-certs\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285952 kubelet[3213]: I0430 00:45:33.285711 3213 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/87e01877-d36d-44f2-b180-9cf12c32320f-cni-net-dir\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285952 kubelet[3213]: I0430 00:45:33.285750 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/87e01877-d36d-44f2-b180-9cf12c32320f-policysync\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.285952 kubelet[3213]: I0430 00:45:33.285786 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/87e01877-d36d-44f2-b180-9cf12c32320f-flexvol-driver-host\") pod \"calico-node-b2wnc\" (UID: \"87e01877-d36d-44f2-b180-9cf12c32320f\") " pod="calico-system/calico-node-b2wnc" Apr 30 00:45:33.320635 containerd[2017]: time="2025-04-30T00:45:33.320582260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75f8c8b4b5-hwz72,Uid:3e1c4424-f5a8-4b2e-861e-9722c7faec18,Namespace:calico-system,Attempt:0,}" Apr 30 00:45:33.385418 containerd[2017]: time="2025-04-30T00:45:33.384940600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:33.385418 containerd[2017]: time="2025-04-30T00:45:33.385071148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:33.385418 containerd[2017]: time="2025-04-30T00:45:33.385152100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:33.385418 containerd[2017]: time="2025-04-30T00:45:33.385330876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:33.403788 kubelet[3213]: E0430 00:45:33.401193 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.403788 kubelet[3213]: W0430 00:45:33.401245 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.403788 kubelet[3213]: E0430 00:45:33.401301 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.428484 kubelet[3213]: E0430 00:45:33.427909 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.428484 kubelet[3213]: W0430 00:45:33.427965 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.428484 kubelet[3213]: E0430 00:45:33.428012 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.467500 systemd[1]: Started cri-containerd-daf4fa6720542569b8039d9b53f49978b5d74615d730a8027179ae3f3cb9099d.scope - libcontainer container daf4fa6720542569b8039d9b53f49978b5d74615d730a8027179ae3f3cb9099d. Apr 30 00:45:33.473413 kubelet[3213]: E0430 00:45:33.473322 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.473413 kubelet[3213]: W0430 00:45:33.473398 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.473629 kubelet[3213]: E0430 00:45:33.473464 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.495340 kubelet[3213]: E0430 00:45:33.495136 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8cg7x" podUID="eb16121b-e738-4b90-8cde-fe4b2beb11f1" Apr 30 00:45:33.571673 containerd[2017]: time="2025-04-30T00:45:33.570166145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b2wnc,Uid:87e01877-d36d-44f2-b180-9cf12c32320f,Namespace:calico-system,Attempt:0,}" Apr 30 00:45:33.576202 kubelet[3213]: E0430 00:45:33.576078 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.576202 kubelet[3213]: W0430 00:45:33.576130 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.576202 kubelet[3213]: E0430 00:45:33.576177 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.579245 kubelet[3213]: E0430 00:45:33.578060 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.579245 kubelet[3213]: W0430 00:45:33.578092 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.579245 kubelet[3213]: E0430 00:45:33.578197 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.580605 kubelet[3213]: E0430 00:45:33.580551 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.580605 kubelet[3213]: W0430 00:45:33.580590 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.580831 kubelet[3213]: E0430 00:45:33.580629 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.582040 kubelet[3213]: E0430 00:45:33.581989 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.582040 kubelet[3213]: W0430 00:45:33.582026 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.582335 kubelet[3213]: E0430 00:45:33.582063 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.585609 kubelet[3213]: E0430 00:45:33.585465 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.585609 kubelet[3213]: W0430 00:45:33.585532 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.585609 kubelet[3213]: E0430 00:45:33.585570 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.587535 kubelet[3213]: E0430 00:45:33.586317 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.587535 kubelet[3213]: W0430 00:45:33.586342 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.587535 kubelet[3213]: E0430 00:45:33.586372 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.587535 kubelet[3213]: E0430 00:45:33.587351 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.589571 kubelet[3213]: W0430 00:45:33.587381 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.589571 kubelet[3213]: E0430 00:45:33.589280 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.591473 kubelet[3213]: E0430 00:45:33.591304 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.591473 kubelet[3213]: W0430 00:45:33.591341 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.591473 kubelet[3213]: E0430 00:45:33.591398 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.593466 kubelet[3213]: E0430 00:45:33.593405 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.593466 kubelet[3213]: W0430 00:45:33.593450 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.593692 kubelet[3213]: E0430 00:45:33.593498 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.593692 kubelet[3213]: I0430 00:45:33.593544 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eb16121b-e738-4b90-8cde-fe4b2beb11f1-varrun\") pod \"csi-node-driver-8cg7x\" (UID: \"eb16121b-e738-4b90-8cde-fe4b2beb11f1\") " pod="calico-system/csi-node-driver-8cg7x" Apr 30 00:45:33.595182 kubelet[3213]: E0430 00:45:33.593962 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.595182 kubelet[3213]: W0430 00:45:33.593997 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.595182 kubelet[3213]: E0430 00:45:33.594025 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.595182 kubelet[3213]: I0430 00:45:33.594061 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb16121b-e738-4b90-8cde-fe4b2beb11f1-kubelet-dir\") pod \"csi-node-driver-8cg7x\" (UID: \"eb16121b-e738-4b90-8cde-fe4b2beb11f1\") " pod="calico-system/csi-node-driver-8cg7x" Apr 30 00:45:33.595182 kubelet[3213]: E0430 00:45:33.595045 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.595182 kubelet[3213]: W0430 00:45:33.595075 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.595182 kubelet[3213]: E0430 00:45:33.595120 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.596032 kubelet[3213]: E0430 00:45:33.595986 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.596032 kubelet[3213]: W0430 00:45:33.596022 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.596392 kubelet[3213]: E0430 00:45:33.596338 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.597454 kubelet[3213]: E0430 00:45:33.597404 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.597454 kubelet[3213]: W0430 00:45:33.597441 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.597659 kubelet[3213]: E0430 00:45:33.597490 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.598369 kubelet[3213]: E0430 00:45:33.598311 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.598369 kubelet[3213]: W0430 00:45:33.598348 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.599166 kubelet[3213]: E0430 00:45:33.598654 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.599513 kubelet[3213]: E0430 00:45:33.599473 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.599513 kubelet[3213]: W0430 00:45:33.599506 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.599959 kubelet[3213]: E0430 00:45:33.599910 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.600938 kubelet[3213]: E0430 00:45:33.600891 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.600938 kubelet[3213]: W0430 00:45:33.600928 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.601289 kubelet[3213]: E0430 00:45:33.601244 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.602911 kubelet[3213]: E0430 00:45:33.602850 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.602911 kubelet[3213]: W0430 00:45:33.602892 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.604170 kubelet[3213]: E0430 00:45:33.603225 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.604453 kubelet[3213]: E0430 00:45:33.604408 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.604453 kubelet[3213]: W0430 00:45:33.604447 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.604626 kubelet[3213]: E0430 00:45:33.604485 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.606440 kubelet[3213]: E0430 00:45:33.606385 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.606440 kubelet[3213]: W0430 00:45:33.606426 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.606636 kubelet[3213]: E0430 00:45:33.606465 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.608244 kubelet[3213]: E0430 00:45:33.607532 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.608244 kubelet[3213]: W0430 00:45:33.607561 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.608244 kubelet[3213]: E0430 00:45:33.607593 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.609460 kubelet[3213]: E0430 00:45:33.609407 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.609460 kubelet[3213]: W0430 00:45:33.609447 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.609685 kubelet[3213]: E0430 00:45:33.609488 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.610073 kubelet[3213]: E0430 00:45:33.610015 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.610073 kubelet[3213]: W0430 00:45:33.610050 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.610975 kubelet[3213]: E0430 00:45:33.610079 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.611340 kubelet[3213]: E0430 00:45:33.611292 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.611340 kubelet[3213]: W0430 00:45:33.611333 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.611483 kubelet[3213]: E0430 00:45:33.611370 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.613207 kubelet[3213]: E0430 00:45:33.612563 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.613207 kubelet[3213]: W0430 00:45:33.612602 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.613207 kubelet[3213]: E0430 00:45:33.612634 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.614066 kubelet[3213]: E0430 00:45:33.613815 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.614066 kubelet[3213]: W0430 00:45:33.613852 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.614066 kubelet[3213]: E0430 00:45:33.613890 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.617188 kubelet[3213]: E0430 00:45:33.616025 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.617188 kubelet[3213]: W0430 00:45:33.616066 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.617188 kubelet[3213]: E0430 00:45:33.616101 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.650159 containerd[2017]: time="2025-04-30T00:45:33.648298026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:33.650159 containerd[2017]: time="2025-04-30T00:45:33.648392106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:33.650159 containerd[2017]: time="2025-04-30T00:45:33.648461790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:33.650159 containerd[2017]: time="2025-04-30T00:45:33.648721410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:33.700759 kubelet[3213]: E0430 00:45:33.700252 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.700759 kubelet[3213]: W0430 00:45:33.700295 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.700759 kubelet[3213]: E0430 00:45:33.700330 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.702009 kubelet[3213]: E0430 00:45:33.701804 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.702009 kubelet[3213]: W0430 00:45:33.701856 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.702009 kubelet[3213]: E0430 00:45:33.701894 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.703427 kubelet[3213]: E0430 00:45:33.703080 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.703427 kubelet[3213]: W0430 00:45:33.703328 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.703427 kubelet[3213]: E0430 00:45:33.703381 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.703427 kubelet[3213]: I0430 00:45:33.703425 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eb16121b-e738-4b90-8cde-fe4b2beb11f1-registration-dir\") pod \"csi-node-driver-8cg7x\" (UID: \"eb16121b-e738-4b90-8cde-fe4b2beb11f1\") " pod="calico-system/csi-node-driver-8cg7x" Apr 30 00:45:33.706694 kubelet[3213]: E0430 00:45:33.705071 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.706694 kubelet[3213]: W0430 00:45:33.705101 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.706694 kubelet[3213]: E0430 00:45:33.705761 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.706694 kubelet[3213]: I0430 00:45:33.705815 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxm5\" (UniqueName: \"kubernetes.io/projected/eb16121b-e738-4b90-8cde-fe4b2beb11f1-kube-api-access-mrxm5\") pod \"csi-node-driver-8cg7x\" (UID: \"eb16121b-e738-4b90-8cde-fe4b2beb11f1\") " pod="calico-system/csi-node-driver-8cg7x" Apr 30 00:45:33.704548 systemd[1]: Started cri-containerd-dc136dbebda1487294acba9c069d564acb76bfb33afa50d29f0aad6070faf4d8.scope - libcontainer container dc136dbebda1487294acba9c069d564acb76bfb33afa50d29f0aad6070faf4d8. Apr 30 00:45:33.711380 kubelet[3213]: E0430 00:45:33.709008 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.711380 kubelet[3213]: W0430 00:45:33.709042 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.711380 kubelet[3213]: E0430 00:45:33.709091 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.712366 kubelet[3213]: E0430 00:45:33.712305 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.712366 kubelet[3213]: W0430 00:45:33.712357 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.712651 kubelet[3213]: E0430 00:45:33.712408 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.713448 kubelet[3213]: E0430 00:45:33.713280 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.713448 kubelet[3213]: W0430 00:45:33.713439 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.714317 kubelet[3213]: E0430 00:45:33.713519 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.714467 kubelet[3213]: E0430 00:45:33.714423 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.714467 kubelet[3213]: W0430 00:45:33.714459 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.715355 kubelet[3213]: E0430 00:45:33.714559 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.716197 kubelet[3213]: E0430 00:45:33.716021 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.716197 kubelet[3213]: W0430 00:45:33.716189 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.716803 kubelet[3213]: E0430 00:45:33.716265 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.716803 kubelet[3213]: I0430 00:45:33.716316 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eb16121b-e738-4b90-8cde-fe4b2beb11f1-socket-dir\") pod \"csi-node-driver-8cg7x\" (UID: \"eb16121b-e738-4b90-8cde-fe4b2beb11f1\") " pod="calico-system/csi-node-driver-8cg7x" Apr 30 00:45:33.717195 kubelet[3213]: E0430 00:45:33.717152 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.717195 kubelet[3213]: W0430 00:45:33.717188 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.717541 kubelet[3213]: E0430 00:45:33.717251 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.718237 kubelet[3213]: E0430 00:45:33.718056 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.718237 kubelet[3213]: W0430 00:45:33.718090 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.718681 kubelet[3213]: E0430 00:45:33.718448 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.719283 kubelet[3213]: E0430 00:45:33.718760 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.719283 kubelet[3213]: W0430 00:45:33.718781 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.719283 kubelet[3213]: E0430 00:45:33.718995 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.719804 kubelet[3213]: E0430 00:45:33.719746 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.719804 kubelet[3213]: W0430 00:45:33.719797 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.721154 kubelet[3213]: E0430 00:45:33.720013 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.721154 kubelet[3213]: E0430 00:45:33.720755 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.721154 kubelet[3213]: W0430 00:45:33.720778 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.721154 kubelet[3213]: E0430 00:45:33.721094 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.721689 kubelet[3213]: E0430 00:45:33.721626 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.721689 kubelet[3213]: W0430 00:45:33.721659 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.721913 kubelet[3213]: E0430 00:45:33.721873 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.722761 kubelet[3213]: E0430 00:45:33.722710 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.722761 kubelet[3213]: W0430 00:45:33.722747 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.723010 kubelet[3213]: E0430 00:45:33.722967 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.723616 kubelet[3213]: E0430 00:45:33.723572 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.723616 kubelet[3213]: W0430 00:45:33.723605 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.724120 kubelet[3213]: E0430 00:45:33.724047 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.724725 kubelet[3213]: E0430 00:45:33.724682 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.724725 kubelet[3213]: W0430 00:45:33.724717 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.725183 kubelet[3213]: E0430 00:45:33.724762 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.725604 kubelet[3213]: E0430 00:45:33.725487 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.725604 kubelet[3213]: W0430 00:45:33.725522 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.725604 kubelet[3213]: E0430 00:45:33.725553 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.787838 containerd[2017]: time="2025-04-30T00:45:33.787596054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75f8c8b4b5-hwz72,Uid:3e1c4424-f5a8-4b2e-861e-9722c7faec18,Namespace:calico-system,Attempt:0,} returns sandbox id \"daf4fa6720542569b8039d9b53f49978b5d74615d730a8027179ae3f3cb9099d\"" Apr 30 00:45:33.798542 containerd[2017]: time="2025-04-30T00:45:33.798467898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 00:45:33.825071 kubelet[3213]: E0430 00:45:33.825020 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.825071 kubelet[3213]: W0430 00:45:33.825056 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.825298 kubelet[3213]: E0430 00:45:33.825091 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.826892 kubelet[3213]: E0430 00:45:33.826838 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.826892 kubelet[3213]: W0430 00:45:33.826874 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.827413 kubelet[3213]: E0430 00:45:33.827375 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.827413 kubelet[3213]: W0430 00:45:33.827407 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.827584 kubelet[3213]: E0430 00:45:33.827436 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.829172 kubelet[3213]: E0430 00:45:33.827793 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.829172 kubelet[3213]: E0430 00:45:33.828954 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.829172 kubelet[3213]: W0430 00:45:33.828992 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.830930 kubelet[3213]: E0430 00:45:33.830745 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.831371 kubelet[3213]: E0430 00:45:33.831330 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.831371 kubelet[3213]: W0430 00:45:33.831363 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.831557 kubelet[3213]: E0430 00:45:33.831418 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.833279 kubelet[3213]: E0430 00:45:33.833234 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.833279 kubelet[3213]: W0430 00:45:33.833270 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.833532 kubelet[3213]: E0430 00:45:33.833309 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.835725 kubelet[3213]: E0430 00:45:33.835509 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.835725 kubelet[3213]: W0430 00:45:33.835541 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.835938 kubelet[3213]: E0430 00:45:33.835605 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.836524 kubelet[3213]: E0430 00:45:33.836319 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.836524 kubelet[3213]: W0430 00:45:33.836346 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.836524 kubelet[3213]: E0430 00:45:33.836505 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.837643 kubelet[3213]: E0430 00:45:33.837437 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.837643 kubelet[3213]: W0430 00:45:33.837464 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.837821 kubelet[3213]: E0430 00:45:33.837664 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.838603 kubelet[3213]: E0430 00:45:33.838350 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.838603 kubelet[3213]: W0430 00:45:33.838376 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.838603 kubelet[3213]: E0430 00:45:33.838438 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.839458 kubelet[3213]: E0430 00:45:33.839262 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.839458 kubelet[3213]: W0430 00:45:33.839291 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.839458 kubelet[3213]: E0430 00:45:33.839360 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.840050 kubelet[3213]: E0430 00:45:33.840024 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.840200 kubelet[3213]: W0430 00:45:33.840177 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.840415 kubelet[3213]: E0430 00:45:33.840377 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.840878 kubelet[3213]: E0430 00:45:33.840851 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.841553 kubelet[3213]: W0430 00:45:33.841006 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.841553 kubelet[3213]: E0430 00:45:33.841052 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.843401 kubelet[3213]: E0430 00:45:33.843087 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.843401 kubelet[3213]: W0430 00:45:33.843372 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.843656 kubelet[3213]: E0430 00:45:33.843410 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.846209 kubelet[3213]: E0430 00:45:33.846001 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.846209 kubelet[3213]: W0430 00:45:33.846042 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.846209 kubelet[3213]: E0430 00:45:33.846078 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:33.873670 kubelet[3213]: E0430 00:45:33.873609 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:33.873670 kubelet[3213]: W0430 00:45:33.873647 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:33.873904 kubelet[3213]: E0430 00:45:33.873712 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:33.879589 containerd[2017]: time="2025-04-30T00:45:33.879530407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b2wnc,Uid:87e01877-d36d-44f2-b180-9cf12c32320f,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc136dbebda1487294acba9c069d564acb76bfb33afa50d29f0aad6070faf4d8\"" Apr 30 00:45:35.331806 kubelet[3213]: E0430 00:45:35.331722 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8cg7x" podUID="eb16121b-e738-4b90-8cde-fe4b2beb11f1" Apr 30 00:45:35.955124 containerd[2017]: time="2025-04-30T00:45:35.955042809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:35.957931 containerd[2017]: time="2025-04-30T00:45:35.957856209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" Apr 30 00:45:35.960522 containerd[2017]: time="2025-04-30T00:45:35.960439401Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:35.965371 containerd[2017]: time="2025-04-30T00:45:35.965282733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:35.967051 containerd[2017]: time="2025-04-30T00:45:35.966878469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.167885235s" Apr 30 00:45:35.967051 containerd[2017]: time="2025-04-30T00:45:35.966933105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" Apr 30 00:45:35.969788 containerd[2017]: time="2025-04-30T00:45:35.969445413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 00:45:35.995256 containerd[2017]: time="2025-04-30T00:45:35.994747113Z" level=info msg="CreateContainer within sandbox \"daf4fa6720542569b8039d9b53f49978b5d74615d730a8027179ae3f3cb9099d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 00:45:36.026917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2399153540.mount: Deactivated successfully. 
Apr 30 00:45:36.033387 containerd[2017]: time="2025-04-30T00:45:36.033265638Z" level=info msg="CreateContainer within sandbox \"daf4fa6720542569b8039d9b53f49978b5d74615d730a8027179ae3f3cb9099d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a350023c57d729a4d74298c6e027e6d1d89824b8a188c4b8b3800f2f49120f67\"" Apr 30 00:45:36.036158 containerd[2017]: time="2025-04-30T00:45:36.034690266Z" level=info msg="StartContainer for \"a350023c57d729a4d74298c6e027e6d1d89824b8a188c4b8b3800f2f49120f67\"" Apr 30 00:45:36.094440 systemd[1]: Started cri-containerd-a350023c57d729a4d74298c6e027e6d1d89824b8a188c4b8b3800f2f49120f67.scope - libcontainer container a350023c57d729a4d74298c6e027e6d1d89824b8a188c4b8b3800f2f49120f67. Apr 30 00:45:36.185133 containerd[2017]: time="2025-04-30T00:45:36.185040342Z" level=info msg="StartContainer for \"a350023c57d729a4d74298c6e027e6d1d89824b8a188c4b8b3800f2f49120f67\" returns successfully" Apr 30 00:45:36.535256 kubelet[3213]: E0430 00:45:36.535213 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.535256 kubelet[3213]: W0430 00:45:36.535250 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.535946 kubelet[3213]: E0430 00:45:36.535285 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.535946 kubelet[3213]: E0430 00:45:36.535645 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.535946 kubelet[3213]: W0430 00:45:36.535687 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.535946 kubelet[3213]: E0430 00:45:36.535800 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.536246 kubelet[3213]: E0430 00:45:36.536190 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.536246 kubelet[3213]: W0430 00:45:36.536240 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.536385 kubelet[3213]: E0430 00:45:36.536264 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:36.536701 kubelet[3213]: E0430 00:45:36.536674 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.536775 kubelet[3213]: W0430 00:45:36.536700 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.536775 kubelet[3213]: E0430 00:45:36.536746 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.537186 kubelet[3213]: E0430 00:45:36.537158 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.537271 kubelet[3213]: W0430 00:45:36.537184 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.537271 kubelet[3213]: E0430 00:45:36.537209 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.537579 kubelet[3213]: E0430 00:45:36.537548 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.537653 kubelet[3213]: W0430 00:45:36.537594 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.537653 kubelet[3213]: E0430 00:45:36.537616 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.538008 kubelet[3213]: E0430 00:45:36.537984 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.538008 kubelet[3213]: W0430 00:45:36.538028 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.538008 kubelet[3213]: E0430 00:45:36.538051 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.538483 kubelet[3213]: E0430 00:45:36.538457 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.538553 kubelet[3213]: W0430 00:45:36.538486 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.538553 kubelet[3213]: E0430 00:45:36.538538 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:36.538915 kubelet[3213]: E0430 00:45:36.538890 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.538915 kubelet[3213]: W0430 00:45:36.538914 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.539061 kubelet[3213]: E0430 00:45:36.538935 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.539288 kubelet[3213]: E0430 00:45:36.539274 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.539348 kubelet[3213]: W0430 00:45:36.539294 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.539348 kubelet[3213]: E0430 00:45:36.539316 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.540274 kubelet[3213]: E0430 00:45:36.540094 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.540274 kubelet[3213]: W0430 00:45:36.540266 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.540559 kubelet[3213]: E0430 00:45:36.540297 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.541145 kubelet[3213]: E0430 00:45:36.540707 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.541145 kubelet[3213]: W0430 00:45:36.540737 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.541145 kubelet[3213]: E0430 00:45:36.540763 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.541145 kubelet[3213]: E0430 00:45:36.541134 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.541407 kubelet[3213]: W0430 00:45:36.541152 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.541407 kubelet[3213]: E0430 00:45:36.541176 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:36.541523 kubelet[3213]: E0430 00:45:36.541487 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.541523 kubelet[3213]: W0430 00:45:36.541503 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.541647 kubelet[3213]: E0430 00:45:36.541523 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.542302 kubelet[3213]: E0430 00:45:36.541789 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.542302 kubelet[3213]: W0430 00:45:36.541816 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.542302 kubelet[3213]: E0430 00:45:36.541838 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.563396 kubelet[3213]: E0430 00:45:36.563361 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.563707 kubelet[3213]: W0430 00:45:36.563545 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.563707 kubelet[3213]: E0430 00:45:36.563581 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.564479 kubelet[3213]: E0430 00:45:36.564308 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.564479 kubelet[3213]: W0430 00:45:36.564333 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.564479 kubelet[3213]: E0430 00:45:36.564368 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.565317 kubelet[3213]: E0430 00:45:36.565059 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.565317 kubelet[3213]: W0430 00:45:36.565079 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.565317 kubelet[3213]: E0430 00:45:36.565140 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:36.565883 kubelet[3213]: E0430 00:45:36.565728 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.565883 kubelet[3213]: W0430 00:45:36.565758 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.566437 kubelet[3213]: E0430 00:45:36.566287 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.566437 kubelet[3213]: W0430 00:45:36.566309 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.566842 kubelet[3213]: E0430 00:45:36.566747 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.566842 kubelet[3213]: E0430 00:45:36.566793 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.567281 kubelet[3213]: E0430 00:45:36.567045 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.567281 kubelet[3213]: W0430 00:45:36.567068 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.567281 kubelet[3213]: E0430 00:45:36.567094 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.567952 kubelet[3213]: E0430 00:45:36.567729 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.567952 kubelet[3213]: W0430 00:45:36.567775 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.567952 kubelet[3213]: E0430 00:45:36.567802 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.568713 kubelet[3213]: E0430 00:45:36.568503 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.568713 kubelet[3213]: W0430 00:45:36.568533 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.568895 kubelet[3213]: E0430 00:45:36.568840 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:36.569472 kubelet[3213]: E0430 00:45:36.569270 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.569472 kubelet[3213]: W0430 00:45:36.569294 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.569472 kubelet[3213]: E0430 00:45:36.569344 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.570041 kubelet[3213]: E0430 00:45:36.569816 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.570041 kubelet[3213]: W0430 00:45:36.569839 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.570041 kubelet[3213]: E0430 00:45:36.569884 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.570392 kubelet[3213]: E0430 00:45:36.570369 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.570493 kubelet[3213]: W0430 00:45:36.570472 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.570612 kubelet[3213]: E0430 00:45:36.570589 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.570946 kubelet[3213]: E0430 00:45:36.570907 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.570946 kubelet[3213]: W0430 00:45:36.570937 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.571082 kubelet[3213]: E0430 00:45:36.570963 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.571360 kubelet[3213]: E0430 00:45:36.571332 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.571426 kubelet[3213]: W0430 00:45:36.571359 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.571426 kubelet[3213]: E0430 00:45:36.571401 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:36.572025 kubelet[3213]: E0430 00:45:36.571980 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.572025 kubelet[3213]: W0430 00:45:36.572011 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.572243 kubelet[3213]: E0430 00:45:36.572047 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.572474 kubelet[3213]: E0430 00:45:36.572447 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.572536 kubelet[3213]: W0430 00:45:36.572473 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.572536 kubelet[3213]: E0430 00:45:36.572514 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.572922 kubelet[3213]: E0430 00:45:36.572896 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.573005 kubelet[3213]: W0430 00:45:36.572921 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.573005 kubelet[3213]: E0430 00:45:36.572962 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.574043 kubelet[3213]: E0430 00:45:36.573975 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.574043 kubelet[3213]: W0430 00:45:36.574031 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.574268 kubelet[3213]: E0430 00:45:36.574074 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 00:45:36.574643 kubelet[3213]: E0430 00:45:36.574601 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 00:45:36.574643 kubelet[3213]: W0430 00:45:36.574632 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 00:45:36.574793 kubelet[3213]: E0430 00:45:36.574664 3213 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 00:45:37.245952 containerd[2017]: time="2025-04-30T00:45:37.245875832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:37.248338 containerd[2017]: time="2025-04-30T00:45:37.248267648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" Apr 30 00:45:37.250654 containerd[2017]: time="2025-04-30T00:45:37.250564208Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:37.255253 containerd[2017]: time="2025-04-30T00:45:37.255176360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:37.256702 containerd[2017]: time="2025-04-30T00:45:37.256652144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.285345891s" Apr 30 00:45:37.257097 containerd[2017]: time="2025-04-30T00:45:37.256810232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" Apr 30 00:45:37.260804 containerd[2017]: time="2025-04-30T00:45:37.260748680Z" level=info msg="CreateContainer within sandbox \"dc136dbebda1487294acba9c069d564acb76bfb33afa50d29f0aad6070faf4d8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 00:45:37.290634 containerd[2017]: time="2025-04-30T00:45:37.290465552Z" level=info msg="CreateContainer within sandbox \"dc136dbebda1487294acba9c069d564acb76bfb33afa50d29f0aad6070faf4d8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6c961313850738f69408aaf5b5c94dcd918511701fd99a1130494e54948acfef\"" Apr 30 00:45:37.293238 containerd[2017]: time="2025-04-30T00:45:37.291263540Z" level=info msg="StartContainer for \"6c961313850738f69408aaf5b5c94dcd918511701fd99a1130494e54948acfef\"" Apr 30 00:45:37.332278 kubelet[3213]: E0430 00:45:37.331645 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8cg7x" podUID="eb16121b-e738-4b90-8cde-fe4b2beb11f1" Apr 30 00:45:37.364417 systemd[1]: Started cri-containerd-6c961313850738f69408aaf5b5c94dcd918511701fd99a1130494e54948acfef.scope - libcontainer container 6c961313850738f69408aaf5b5c94dcd918511701fd99a1130494e54948acfef. Apr 30 00:45:37.444243 containerd[2017]: time="2025-04-30T00:45:37.444181281Z" level=info msg="StartContainer for \"6c961313850738f69408aaf5b5c94dcd918511701fd99a1130494e54948acfef\" returns successfully" Apr 30 00:45:37.470000 systemd[1]: cri-containerd-6c961313850738f69408aaf5b5c94dcd918511701fd99a1130494e54948acfef.scope: Deactivated successfully. 
Apr 30 00:45:37.504493 kubelet[3213]: I0430 00:45:37.502889 3213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:45:37.540964 kubelet[3213]: I0430 00:45:37.540884 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75f8c8b4b5-hwz72" podStartSLOduration=3.366615978 podStartE2EDuration="5.540859929s" podCreationTimestamp="2025-04-30 00:45:32 +0000 UTC" firstStartedPulling="2025-04-30 00:45:33.794677986 +0000 UTC m=+18.754215490" lastFinishedPulling="2025-04-30 00:45:35.968921853 +0000 UTC m=+20.928459441" observedRunningTime="2025-04-30 00:45:36.510492296 +0000 UTC m=+21.470029824" watchObservedRunningTime="2025-04-30 00:45:37.540859929 +0000 UTC m=+22.500397445" Apr 30 00:45:37.713388 containerd[2017]: time="2025-04-30T00:45:37.713261170Z" level=info msg="shim disconnected" id=6c961313850738f69408aaf5b5c94dcd918511701fd99a1130494e54948acfef namespace=k8s.io Apr 30 00:45:37.713388 containerd[2017]: time="2025-04-30T00:45:37.713372266Z" level=warning msg="cleaning up after shim disconnected" id=6c961313850738f69408aaf5b5c94dcd918511701fd99a1130494e54948acfef namespace=k8s.io Apr 30 00:45:37.713823 containerd[2017]: time="2025-04-30T00:45:37.713421730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:45:37.978775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c961313850738f69408aaf5b5c94dcd918511701fd99a1130494e54948acfef-rootfs.mount: Deactivated successfully. Apr 30 00:45:38.512809 containerd[2017]: time="2025-04-30T00:45:38.512720218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 00:45:39.333560 kubelet[3213]: E0430 00:45:39.333041 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8cg7x" podUID="eb16121b-e738-4b90-8cde-fe4b2beb11f1" Apr 30 00:45:41.331671 kubelet[3213]: E0430 00:45:41.331570 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8cg7x" podUID="eb16121b-e738-4b90-8cde-fe4b2beb11f1" Apr 30 00:45:42.028499 containerd[2017]: time="2025-04-30T00:45:42.028440515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:42.030729 containerd[2017]: time="2025-04-30T00:45:42.030635303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" Apr 30 00:45:42.032799 containerd[2017]: time="2025-04-30T00:45:42.032723423Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:42.037678 containerd[2017]: time="2025-04-30T00:45:42.037581083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:42.039266 containerd[2017]: time="2025-04-30T00:45:42.039214259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id 
\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.526402625s" Apr 30 00:45:42.039378 containerd[2017]: time="2025-04-30T00:45:42.039269735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" Apr 30 00:45:42.044460 containerd[2017]: time="2025-04-30T00:45:42.044152091Z" level=info msg="CreateContainer within sandbox \"dc136dbebda1487294acba9c069d564acb76bfb33afa50d29f0aad6070faf4d8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 00:45:42.078093 containerd[2017]: time="2025-04-30T00:45:42.078029160Z" level=info msg="CreateContainer within sandbox \"dc136dbebda1487294acba9c069d564acb76bfb33afa50d29f0aad6070faf4d8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1f23ed6802b1f2ad424a62ef693c732730c10ab5d5fe4d1e55fae5620a8db27c\"" Apr 30 00:45:42.079310 containerd[2017]: time="2025-04-30T00:45:42.079225080Z" level=info msg="StartContainer for \"1f23ed6802b1f2ad424a62ef693c732730c10ab5d5fe4d1e55fae5620a8db27c\"" Apr 30 00:45:42.142438 systemd[1]: Started cri-containerd-1f23ed6802b1f2ad424a62ef693c732730c10ab5d5fe4d1e55fae5620a8db27c.scope - libcontainer container 1f23ed6802b1f2ad424a62ef693c732730c10ab5d5fe4d1e55fae5620a8db27c. Apr 30 00:45:42.199734 containerd[2017]: time="2025-04-30T00:45:42.199520664Z" level=info msg="StartContainer for \"1f23ed6802b1f2ad424a62ef693c732730c10ab5d5fe4d1e55fae5620a8db27c\" returns successfully" Apr 30 00:45:43.117490 containerd[2017]: time="2025-04-30T00:45:43.117413149Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:45:43.122554 systemd[1]: cri-containerd-1f23ed6802b1f2ad424a62ef693c732730c10ab5d5fe4d1e55fae5620a8db27c.scope: Deactivated successfully. Apr 30 00:45:43.162305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f23ed6802b1f2ad424a62ef693c732730c10ab5d5fe4d1e55fae5620a8db27c-rootfs.mount: Deactivated successfully. Apr 30 00:45:43.183309 kubelet[3213]: I0430 00:45:43.183250 3213 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 00:45:43.257230 systemd[1]: Created slice kubepods-burstable-pod03ba2d14_056d_41e7_b778_1e52ace77241.slice - libcontainer container kubepods-burstable-pod03ba2d14_056d_41e7_b778_1e52ace77241.slice. Apr 30 00:45:43.281360 systemd[1]: Created slice kubepods-burstable-pod9e2cbcdd_5e79_4f1d_b433_3d65cef0e3e1.slice - libcontainer container kubepods-burstable-pod9e2cbcdd_5e79_4f1d_b433_3d65cef0e3e1.slice. Apr 30 00:45:43.301220 systemd[1]: Created slice kubepods-besteffort-podd404a0b8_b738_4be9_a74e_856f784c975d.slice - libcontainer container kubepods-besteffort-podd404a0b8_b738_4be9_a74e_856f784c975d.slice. 
Apr 30 00:45:43.314775 kubelet[3213]: I0430 00:45:43.314707 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2k2q\" (UniqueName: \"kubernetes.io/projected/d404a0b8-b738-4be9-a74e-856f784c975d-kube-api-access-f2k2q\") pod \"calico-apiserver-685b87fb46-zcfxw\" (UID: \"d404a0b8-b738-4be9-a74e-856f784c975d\") " pod="calico-apiserver/calico-apiserver-685b87fb46-zcfxw" Apr 30 00:45:43.314945 kubelet[3213]: I0430 00:45:43.314782 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91b11be6-074e-42d1-997b-badf85674f33-calico-apiserver-certs\") pod \"calico-apiserver-685b87fb46-6z5zg\" (UID: \"91b11be6-074e-42d1-997b-badf85674f33\") " pod="calico-apiserver/calico-apiserver-685b87fb46-6z5zg" Apr 30 00:45:43.314945 kubelet[3213]: I0430 00:45:43.314824 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1-config-volume\") pod \"coredns-668d6bf9bc-2wbvz\" (UID: \"9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1\") " pod="kube-system/coredns-668d6bf9bc-2wbvz" Apr 30 00:45:43.314945 kubelet[3213]: I0430 00:45:43.314871 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qppz\" (UniqueName: \"kubernetes.io/projected/91b11be6-074e-42d1-997b-badf85674f33-kube-api-access-7qppz\") pod \"calico-apiserver-685b87fb46-6z5zg\" (UID: \"91b11be6-074e-42d1-997b-badf85674f33\") " pod="calico-apiserver/calico-apiserver-685b87fb46-6z5zg" Apr 30 00:45:43.314945 kubelet[3213]: I0430 00:45:43.314918 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzksn\" (UniqueName: \"kubernetes.io/projected/da044bc6-8690-4c3f-9ab1-4db58205c9ed-kube-api-access-xzksn\") pod \"calico-kube-controllers-7bfc54cc8f-c7qvh\" (UID: \"da044bc6-8690-4c3f-9ab1-4db58205c9ed\") " pod="calico-system/calico-kube-controllers-7bfc54cc8f-c7qvh" Apr 30 00:45:43.315221 kubelet[3213]: I0430 00:45:43.314962 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqmxs\" (UniqueName: \"kubernetes.io/projected/9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1-kube-api-access-xqmxs\") pod \"coredns-668d6bf9bc-2wbvz\" (UID: \"9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1\") " pod="kube-system/coredns-668d6bf9bc-2wbvz" Apr 30 00:45:43.315221 kubelet[3213]: I0430 00:45:43.315010 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssp8m\" (UniqueName: \"kubernetes.io/projected/03ba2d14-056d-41e7-b778-1e52ace77241-kube-api-access-ssp8m\") pod \"coredns-668d6bf9bc-cgtrr\" (UID: \"03ba2d14-056d-41e7-b778-1e52ace77241\") " pod="kube-system/coredns-668d6bf9bc-cgtrr" Apr 30 00:45:43.315221 kubelet[3213]: I0430 00:45:43.315049 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da044bc6-8690-4c3f-9ab1-4db58205c9ed-tigera-ca-bundle\") pod \"calico-kube-controllers-7bfc54cc8f-c7qvh\" (UID: \"da044bc6-8690-4c3f-9ab1-4db58205c9ed\") " pod="calico-system/calico-kube-controllers-7bfc54cc8f-c7qvh" Apr 30 00:45:43.318358 kubelet[3213]: I0430 00:45:43.315086 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03ba2d14-056d-41e7-b778-1e52ace77241-config-volume\") pod \"coredns-668d6bf9bc-cgtrr\" (UID: \"03ba2d14-056d-41e7-b778-1e52ace77241\") " pod="kube-system/coredns-668d6bf9bc-cgtrr" Apr 30 00:45:43.318358 kubelet[3213]: I0430 00:45:43.317662 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d404a0b8-b738-4be9-a74e-856f784c975d-calico-apiserver-certs\") pod \"calico-apiserver-685b87fb46-zcfxw\" (UID: \"d404a0b8-b738-4be9-a74e-856f784c975d\") " pod="calico-apiserver/calico-apiserver-685b87fb46-zcfxw" Apr 30 00:45:43.319502 systemd[1]: Created slice kubepods-besteffort-podda044bc6_8690_4c3f_9ab1_4db58205c9ed.slice - libcontainer container kubepods-besteffort-podda044bc6_8690_4c3f_9ab1_4db58205c9ed.slice. Apr 30 00:45:43.362981 systemd[1]: Created slice kubepods-besteffort-pod91b11be6_074e_42d1_997b_badf85674f33.slice - libcontainer container kubepods-besteffort-pod91b11be6_074e_42d1_997b_badf85674f33.slice. Apr 30 00:45:43.382757 systemd[1]: Created slice kubepods-besteffort-podeb16121b_e738_4b90_8cde_fe4b2beb11f1.slice - libcontainer container kubepods-besteffort-podeb16121b_e738_4b90_8cde_fe4b2beb11f1.slice. Apr 30 00:45:43.392621 containerd[2017]: time="2025-04-30T00:45:43.391833386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8cg7x,Uid:eb16121b-e738-4b90-8cde-fe4b2beb11f1,Namespace:calico-system,Attempt:0,}" Apr 30 00:45:43.610739 containerd[2017]: time="2025-04-30T00:45:43.610666587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2wbvz,Uid:9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1,Namespace:kube-system,Attempt:0,}" Apr 30 00:45:43.611170 containerd[2017]: time="2025-04-30T00:45:43.611085519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cgtrr,Uid:03ba2d14-056d-41e7-b778-1e52ace77241,Namespace:kube-system,Attempt:0,}" Apr 30 00:45:43.613780 containerd[2017]: time="2025-04-30T00:45:43.613665459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685b87fb46-zcfxw,Uid:d404a0b8-b738-4be9-a74e-856f784c975d,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:45:43.643203 containerd[2017]: time="2025-04-30T00:45:43.642989811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc54cc8f-c7qvh,Uid:da044bc6-8690-4c3f-9ab1-4db58205c9ed,Namespace:calico-system,Attempt:0,}" Apr 30 00:45:43.673738 containerd[2017]: time="2025-04-30T00:45:43.673675767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685b87fb46-6z5zg,Uid:91b11be6-074e-42d1-997b-badf85674f33,Namespace:calico-apiserver,Attempt:0,}" Apr 30 00:45:44.100677 containerd[2017]: time="2025-04-30T00:45:44.100354250Z" level=info msg="shim disconnected" id=1f23ed6802b1f2ad424a62ef693c732730c10ab5d5fe4d1e55fae5620a8db27c namespace=k8s.io Apr 30 00:45:44.100677 containerd[2017]: time="2025-04-30T00:45:44.100426634Z" level=warning msg="cleaning up after shim disconnected" id=1f23ed6802b1f2ad424a62ef693c732730c10ab5d5fe4d1e55fae5620a8db27c namespace=k8s.io Apr 30 00:45:44.100677 containerd[2017]: time="2025-04-30T00:45:44.100448690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:45:44.433383 containerd[2017]: time="2025-04-30T00:45:44.433141119Z" level=error msg="Failed to destroy network for sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.435411 containerd[2017]: time="2025-04-30T00:45:44.434992263Z" level=error msg="encountered an error cleaning up failed sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.435411 containerd[2017]: time="2025-04-30T00:45:44.435093663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8cg7x,Uid:eb16121b-e738-4b90-8cde-fe4b2beb11f1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.437033 kubelet[3213]: E0430 00:45:44.436363 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.437033 kubelet[3213]: E0430 00:45:44.436462 3213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8cg7x" Apr 30 00:45:44.437033 kubelet[3213]: E0430 00:45:44.436499 3213 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8cg7x" Apr 30 00:45:44.438945 kubelet[3213]: E0430 00:45:44.436581 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8cg7x_calico-system(eb16121b-e738-4b90-8cde-fe4b2beb11f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8cg7x_calico-system(eb16121b-e738-4b90-8cde-fe4b2beb11f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8cg7x" podUID="eb16121b-e738-4b90-8cde-fe4b2beb11f1" Apr 30 00:45:44.564007 kubelet[3213]: I0430 00:45:44.561399 3213 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:45:44.564679 containerd[2017]: time="2025-04-30T00:45:44.564614728Z" level=info msg="StopPodSandbox for \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\"" Apr 30 00:45:44.570089 containerd[2017]: time="2025-04-30T00:45:44.567872068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 00:45:44.573139 containerd[2017]: time="2025-04-30T00:45:44.569101552Z" level=info msg="Ensure that sandbox f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0 in task-service has been cleanup successfully" Apr 30 00:45:44.589027 containerd[2017]: time="2025-04-30T00:45:44.588940696Z" level=error msg="Failed to destroy network for sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.595887 containerd[2017]: time="2025-04-30T00:45:44.594822964Z" level=error msg="encountered an error cleaning up failed sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.597150 containerd[2017]: time="2025-04-30T00:45:44.596347408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cgtrr,Uid:03ba2d14-056d-41e7-b778-1e52ace77241,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.601226 kubelet[3213]: E0430 00:45:44.599937 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.601705 kubelet[3213]: E0430 00:45:44.601535 3213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cgtrr" Apr 30 00:45:44.601705 kubelet[3213]: E0430 00:45:44.601593 3213 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cgtrr" Apr 30 00:45:44.603246 kubelet[3213]: E0430 00:45:44.601961 3213 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cgtrr_kube-system(03ba2d14-056d-41e7-b778-1e52ace77241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cgtrr_kube-system(03ba2d14-056d-41e7-b778-1e52ace77241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cgtrr" podUID="03ba2d14-056d-41e7-b778-1e52ace77241" Apr 30 00:45:44.633985 containerd[2017]: time="2025-04-30T00:45:44.633556972Z" level=error msg="Failed to destroy network for sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.636270 containerd[2017]: time="2025-04-30T00:45:44.636185920Z" level=error msg="encountered an error cleaning up failed sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.637341 containerd[2017]: time="2025-04-30T00:45:44.636299728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685b87fb46-6z5zg,Uid:91b11be6-074e-42d1-997b-badf85674f33,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.638349 kubelet[3213]: E0430 00:45:44.636606 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.638349 kubelet[3213]: E0430 00:45:44.636680 3213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-685b87fb46-6z5zg" Apr 30 00:45:44.638349 kubelet[3213]: E0430 00:45:44.636718 3213 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-685b87fb46-6z5zg" Apr 30 00:45:44.638552 
kubelet[3213]: E0430 00:45:44.636779 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-685b87fb46-6z5zg_calico-apiserver(91b11be6-074e-42d1-997b-badf85674f33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-685b87fb46-6z5zg_calico-apiserver(91b11be6-074e-42d1-997b-badf85674f33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-685b87fb46-6z5zg" podUID="91b11be6-074e-42d1-997b-badf85674f33" Apr 30 00:45:44.654952 containerd[2017]: time="2025-04-30T00:45:44.654074812Z" level=error msg="Failed to destroy network for sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.654952 containerd[2017]: time="2025-04-30T00:45:44.654722344Z" level=error msg="encountered an error cleaning up failed sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.654952 containerd[2017]: time="2025-04-30T00:45:44.654800692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685b87fb46-zcfxw,Uid:d404a0b8-b738-4be9-a74e-856f784c975d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.655276 kubelet[3213]: E0430 00:45:44.655212 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.655388 kubelet[3213]: E0430 00:45:44.655293 3213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-685b87fb46-zcfxw" Apr 30 00:45:44.655388 kubelet[3213]: E0430 00:45:44.655327 3213 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-685b87fb46-zcfxw" Apr 30 00:45:44.655505 kubelet[3213]: E0430 00:45:44.655386 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-685b87fb46-zcfxw_calico-apiserver(d404a0b8-b738-4be9-a74e-856f784c975d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-685b87fb46-zcfxw_calico-apiserver(d404a0b8-b738-4be9-a74e-856f784c975d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-685b87fb46-zcfxw" podUID="d404a0b8-b738-4be9-a74e-856f784c975d" Apr 30 00:45:44.663566 containerd[2017]: time="2025-04-30T00:45:44.660463828Z" level=error msg="Failed to destroy network for sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.663566 containerd[2017]: time="2025-04-30T00:45:44.662159956Z" level=error msg="encountered an error cleaning up failed sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.663566 containerd[2017]: time="2025-04-30T00:45:44.662368432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2wbvz,Uid:9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.664622 kubelet[3213]: E0430 00:45:44.664545 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.664766 kubelet[3213]: E0430 00:45:44.664628 3213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2wbvz" Apr 30 00:45:44.664766 kubelet[3213]: E0430 00:45:44.664672 3213 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2wbvz" Apr 30 00:45:44.664766 kubelet[3213]: E0430 00:45:44.664732 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2wbvz_kube-system(9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2wbvz_kube-system(9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2wbvz" podUID="9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1" Apr 30 00:45:44.678185 containerd[2017]: time="2025-04-30T00:45:44.678050608Z" level=error msg="Failed to destroy network for sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.678961 containerd[2017]: time="2025-04-30T00:45:44.678864064Z" level=error msg="encountered an error cleaning up failed sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.679071 containerd[2017]: time="2025-04-30T00:45:44.679024336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc54cc8f-c7qvh,Uid:da044bc6-8690-4c3f-9ab1-4db58205c9ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.679448 kubelet[3213]: E0430 00:45:44.679402 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.680092 kubelet[3213]: E0430 00:45:44.679590 3213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bfc54cc8f-c7qvh" Apr 30 00:45:44.680092 kubelet[3213]: E0430 00:45:44.679634 3213 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bfc54cc8f-c7qvh" Apr 30 00:45:44.680092 kubelet[3213]: E0430 00:45:44.679722 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bfc54cc8f-c7qvh_calico-system(da044bc6-8690-4c3f-9ab1-4db58205c9ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bfc54cc8f-c7qvh_calico-system(da044bc6-8690-4c3f-9ab1-4db58205c9ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bfc54cc8f-c7qvh" podUID="da044bc6-8690-4c3f-9ab1-4db58205c9ed" Apr 30 00:45:44.698431 containerd[2017]: time="2025-04-30T00:45:44.698151893Z" level=error msg="StopPodSandbox for \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\" failed" error="failed to destroy network for sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:44.699352 kubelet[3213]: E0430 00:45:44.698937 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:45:44.699352 kubelet[3213]: E0430 00:45:44.699060 3213 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0"} Apr 30 00:45:44.699352 kubelet[3213]: E0430 00:45:44.699197 3213 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb16121b-e738-4b90-8cde-fe4b2beb11f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:45:44.699352 kubelet[3213]: E0430 00:45:44.699266 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb16121b-e738-4b90-8cde-fe4b2beb11f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-8cg7x" podUID="eb16121b-e738-4b90-8cde-fe4b2beb11f1" Apr 30 00:45:45.161678 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678-shm.mount: Deactivated successfully. Apr 30 00:45:45.162537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532-shm.mount: Deactivated successfully. Apr 30 00:45:45.162692 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55-shm.mount: Deactivated successfully. Apr 30 00:45:45.162839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0-shm.mount: Deactivated successfully. Apr 30 00:45:45.567760 kubelet[3213]: I0430 00:45:45.566885 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:45:45.570783 containerd[2017]: time="2025-04-30T00:45:45.569181257Z" level=info msg="StopPodSandbox for \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\"" Apr 30 00:45:45.570783 containerd[2017]: time="2025-04-30T00:45:45.570050981Z" level=info msg="Ensure that sandbox c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55 in task-service has been cleanup successfully" Apr 30 00:45:45.573281 kubelet[3213]: I0430 00:45:45.573226 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:45:45.576234 containerd[2017]: time="2025-04-30T00:45:45.575368685Z" level=info msg="StopPodSandbox for \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\"" Apr 30 00:45:45.578133 containerd[2017]: time="2025-04-30T00:45:45.576991169Z" level=info msg="Ensure that sandbox c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267 in task-service has been cleanup successfully" Apr 30 00:45:45.580075 kubelet[3213]: I0430 00:45:45.579897 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:45:45.584285 containerd[2017]: time="2025-04-30T00:45:45.584153549Z" level=info msg="StopPodSandbox for \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\"" Apr 30 00:45:45.584999 containerd[2017]: time="2025-04-30T00:45:45.584464265Z" level=info msg="Ensure that sandbox 26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028 in task-service has been cleanup successfully" Apr 30 00:45:45.594880 kubelet[3213]: I0430 00:45:45.594814 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:45:45.601372 containerd[2017]: time="2025-04-30T00:45:45.601302737Z" level=info msg="StopPodSandbox for \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\"" Apr 30 00:45:45.602607 containerd[2017]: time="2025-04-30T00:45:45.601588349Z" level=info msg="Ensure that sandbox 85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532 in task-service has been cleanup successfully" Apr 30 00:45:45.611480 kubelet[3213]: I0430 00:45:45.611386 3213 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:45:45.617307 containerd[2017]: time="2025-04-30T00:45:45.617071349Z" level=info msg="StopPodSandbox for \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\"" Apr 30 00:45:45.619517 containerd[2017]: time="2025-04-30T00:45:45.619442513Z" level=info msg="Ensure that sandbox 9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678 in task-service has been cleanup successfully" Apr 30 00:45:45.776136 containerd[2017]: time="2025-04-30T00:45:45.775987122Z" level=error msg="StopPodSandbox for \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\" failed" error="failed to destroy network for sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:45.776837 kubelet[3213]: E0430 00:45:45.776615 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:45:45.776837 kubelet[3213]: E0430 00:45:45.776684 3213 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678"} Apr 30 00:45:45.776837 kubelet[3213]: E0430 00:45:45.776746 3213 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:45:45.776837 kubelet[3213]: E0430 00:45:45.776785 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2wbvz" podUID="9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1" Apr 30 00:45:45.834944 containerd[2017]: time="2025-04-30T00:45:45.834843138Z" level=error msg="StopPodSandbox for \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\" failed" error="failed to destroy network for sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:45.836709 kubelet[3213]: E0430 00:45:45.836443 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:45:45.836709 kubelet[3213]: E0430 00:45:45.836528 3213 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55"} Apr 30 00:45:45.836709 kubelet[3213]: E0430 00:45:45.836588 3213 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91b11be6-074e-42d1-997b-badf85674f33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:45:45.836709 kubelet[3213]: E0430 00:45:45.836629 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91b11be6-074e-42d1-997b-badf85674f33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-685b87fb46-6z5zg" podUID="91b11be6-074e-42d1-997b-badf85674f33" Apr 30 00:45:45.841900 containerd[2017]: time="2025-04-30T00:45:45.841822422Z" level=error msg="StopPodSandbox for \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\" failed" error="failed to destroy network for sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:45.842083 containerd[2017]: time="2025-04-30T00:45:45.842046510Z" level=error msg="StopPodSandbox for \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\" failed" error="failed to destroy network for sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:45.842787 kubelet[3213]: E0430 00:45:45.842352 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:45:45.842787 kubelet[3213]: E0430 00:45:45.842421 3213 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267"} Apr 30 00:45:45.842787 kubelet[3213]: E0430 00:45:45.842479 3213 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da044bc6-8690-4c3f-9ab1-4db58205c9ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:45:45.842787 kubelet[3213]: E0430 00:45:45.842517 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da044bc6-8690-4c3f-9ab1-4db58205c9ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bfc54cc8f-c7qvh" podUID="da044bc6-8690-4c3f-9ab1-4db58205c9ed" Apr 30 00:45:45.843412 kubelet[3213]: E0430 00:45:45.842568 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:45:45.843412 kubelet[3213]: E0430 00:45:45.842600 3213 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028"} Apr 30 00:45:45.843412 kubelet[3213]: E0430 00:45:45.842641 3213 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d404a0b8-b738-4be9-a74e-856f784c975d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:45:45.843412 kubelet[3213]: E0430 00:45:45.842678 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d404a0b8-b738-4be9-a74e-856f784c975d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-685b87fb46-zcfxw" podUID="d404a0b8-b738-4be9-a74e-856f784c975d" Apr 30 00:45:45.849869 containerd[2017]: time="2025-04-30T00:45:45.849782754Z" level=error msg="StopPodSandbox for \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\" failed" error="failed to destroy network for sandbox 
\"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 00:45:45.850540 kubelet[3213]: E0430 00:45:45.850312 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:45:45.850540 kubelet[3213]: E0430 00:45:45.850407 3213 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532"} Apr 30 00:45:45.850540 kubelet[3213]: E0430 00:45:45.850495 3213 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03ba2d14-056d-41e7-b778-1e52ace77241\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 00:45:45.850949 kubelet[3213]: E0430 00:45:45.850863 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03ba2d14-056d-41e7-b778-1e52ace77241\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cgtrr" podUID="03ba2d14-056d-41e7-b778-1e52ace77241" Apr 30 00:45:50.573578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667395922.mount: Deactivated successfully. 
Apr 30 00:45:50.640227 containerd[2017]: time="2025-04-30T00:45:50.639951970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:50.643820 containerd[2017]: time="2025-04-30T00:45:50.643286242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" Apr 30 00:45:50.646453 containerd[2017]: time="2025-04-30T00:45:50.646333618Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:50.650802 containerd[2017]: time="2025-04-30T00:45:50.650722858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:50.653011 containerd[2017]: time="2025-04-30T00:45:50.652315138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 6.082192254s" Apr 30 00:45:50.653011 containerd[2017]: time="2025-04-30T00:45:50.652402966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" Apr 30 00:45:50.687577 containerd[2017]: time="2025-04-30T00:45:50.687484462Z" level=info msg="CreateContainer within sandbox \"dc136dbebda1487294acba9c069d564acb76bfb33afa50d29f0aad6070faf4d8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 00:45:50.733313 containerd[2017]: time="2025-04-30T00:45:50.732980303Z" level=info msg="CreateContainer within sandbox \"dc136dbebda1487294acba9c069d564acb76bfb33afa50d29f0aad6070faf4d8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9848c2eecaa5be551c2c05daefe5eab236f59d656b0970bc0740eb475ecfaf5e\"" Apr 30 00:45:50.733617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1508143611.mount: Deactivated successfully. Apr 30 00:45:50.736982 containerd[2017]: time="2025-04-30T00:45:50.736678163Z" level=info msg="StartContainer for \"9848c2eecaa5be551c2c05daefe5eab236f59d656b0970bc0740eb475ecfaf5e\"" Apr 30 00:45:50.791447 systemd[1]: Started cri-containerd-9848c2eecaa5be551c2c05daefe5eab236f59d656b0970bc0740eb475ecfaf5e.scope - libcontainer container 9848c2eecaa5be551c2c05daefe5eab236f59d656b0970bc0740eb475ecfaf5e. Apr 30 00:45:50.860347 containerd[2017]: time="2025-04-30T00:45:50.860021207Z" level=info msg="StartContainer for \"9848c2eecaa5be551c2c05daefe5eab236f59d656b0970bc0740eb475ecfaf5e\" returns successfully" Apr 30 00:45:50.993457 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 00:45:50.993662 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Apr 30 00:45:51.673716 kubelet[3213]: I0430 00:45:51.673192 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b2wnc" podStartSLOduration=1.901042736 podStartE2EDuration="18.673170047s" podCreationTimestamp="2025-04-30 00:45:33 +0000 UTC" firstStartedPulling="2025-04-30 00:45:33.881931235 +0000 UTC m=+18.841468739" lastFinishedPulling="2025-04-30 00:45:50.654058546 +0000 UTC m=+35.613596050" observedRunningTime="2025-04-30 00:45:51.672783479 +0000 UTC m=+36.632320995" watchObservedRunningTime="2025-04-30 00:45:51.673170047 +0000 UTC m=+36.632707575" Apr 30 00:45:55.336900 containerd[2017]: time="2025-04-30T00:45:55.335782897Z" level=info msg="StopPodSandbox for \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\"" Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.470 [INFO][4822] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.471 [INFO][4822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" iface="eth0" netns="/var/run/netns/cni-d2ef64cf-cdb2-a542-c542-1b1ae76980ab" Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.474 [INFO][4822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" iface="eth0" netns="/var/run/netns/cni-d2ef64cf-cdb2-a542-c542-1b1ae76980ab" Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.478 [INFO][4822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" iface="eth0" netns="/var/run/netns/cni-d2ef64cf-cdb2-a542-c542-1b1ae76980ab" Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.478 [INFO][4822] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.479 [INFO][4822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.533 [INFO][4829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" HandleID="k8s-pod-network.f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.533 [INFO][4829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.533 [INFO][4829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.546 [WARNING][4829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" HandleID="k8s-pod-network.f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.546 [INFO][4829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" HandleID="k8s-pod-network.f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.548 [INFO][4829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:45:55.555753 containerd[2017]: 2025-04-30 00:45:55.553 [INFO][4822] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:45:55.558823 containerd[2017]: time="2025-04-30T00:45:55.555967467Z" level=info msg="TearDown network for sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\" successfully" Apr 30 00:45:55.558823 containerd[2017]: time="2025-04-30T00:45:55.556009059Z" level=info msg="StopPodSandbox for \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\" returns successfully" Apr 30 00:45:55.561554 containerd[2017]: time="2025-04-30T00:45:55.561377535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8cg7x,Uid:eb16121b-e738-4b90-8cde-fe4b2beb11f1,Namespace:calico-system,Attempt:1,}" Apr 30 00:45:55.563185 systemd[1]: run-netns-cni\x2dd2ef64cf\x2dcdb2\x2da542\x2dc542\x2d1b1ae76980ab.mount: Deactivated successfully. Apr 30 00:45:55.855256 systemd-networkd[1933]: calia759c9b5b7a: Link UP Apr 30 00:45:55.855686 systemd-networkd[1933]: calia759c9b5b7a: Gained carrier Apr 30 00:45:55.862140 (udev-worker)[4861]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.639 [INFO][4839] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.671 [INFO][4839] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0 csi-node-driver- calico-system eb16121b-e738-4b90-8cde-fe4b2beb11f1 765 0 2025-04-30 00:45:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-25-63 csi-node-driver-8cg7x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia759c9b5b7a [] []}} ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Namespace="calico-system" Pod="csi-node-driver-8cg7x" WorkloadEndpoint="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.671 [INFO][4839] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Namespace="calico-system" Pod="csi-node-driver-8cg7x" WorkloadEndpoint="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.772 [INFO][4851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" HandleID="k8s-pod-network.8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.790 [INFO][4851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" HandleID="k8s-pod-network.8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c540), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-63", "pod":"csi-node-driver-8cg7x", "timestamp":"2025-04-30 00:45:55.772432372 +0000 UTC"}, Hostname:"ip-172-31-25-63", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.790 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.790 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.790 [INFO][4851] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-63' Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.793 [INFO][4851] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" host="ip-172-31-25-63" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.800 [INFO][4851] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-63" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.808 [INFO][4851] ipam/ipam.go 489: Trying affinity for 192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.812 [INFO][4851] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.816 [INFO][4851] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.816 [INFO][4851] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.64/26 handle="k8s-pod-network.8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" host="ip-172-31-25-63" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.818 [INFO][4851] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272 Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.827 [INFO][4851] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.64/26 handle="k8s-pod-network.8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" host="ip-172-31-25-63" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.837 [INFO][4851] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.65/26] block=192.168.107.64/26 handle="k8s-pod-network.8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" host="ip-172-31-25-63" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.838 [INFO][4851] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.65/26] handle="k8s-pod-network.8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" host="ip-172-31-25-63" Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.838 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:45:55.894873 containerd[2017]: 2025-04-30 00:45:55.838 [INFO][4851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.65/26] IPv6=[] ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" HandleID="k8s-pod-network.8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:45:55.899321 containerd[2017]: 2025-04-30 00:45:55.842 [INFO][4839] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Namespace="calico-system" Pod="csi-node-driver-8cg7x" WorkloadEndpoint="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb16121b-e738-4b90-8cde-fe4b2beb11f1", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"", Pod:"csi-node-driver-8cg7x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia759c9b5b7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:45:55.899321 containerd[2017]: 2025-04-30 00:45:55.842 [INFO][4839] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.65/32] ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Namespace="calico-system" Pod="csi-node-driver-8cg7x" WorkloadEndpoint="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:45:55.899321 containerd[2017]: 2025-04-30 00:45:55.842 [INFO][4839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia759c9b5b7a ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Namespace="calico-system" Pod="csi-node-driver-8cg7x" WorkloadEndpoint="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:45:55.899321 containerd[2017]: 2025-04-30 00:45:55.860 [INFO][4839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Namespace="calico-system" Pod="csi-node-driver-8cg7x" WorkloadEndpoint="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:45:55.899321 containerd[2017]: 2025-04-30 00:45:55.861 [INFO][4839] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Namespace="calico-system" Pod="csi-node-driver-8cg7x" 
WorkloadEndpoint="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb16121b-e738-4b90-8cde-fe4b2beb11f1", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272", Pod:"csi-node-driver-8cg7x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia759c9b5b7a", MAC:"56:2a:7b:f5:69:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:45:55.899321 containerd[2017]: 2025-04-30 00:45:55.889 [INFO][4839] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272" Namespace="calico-system" Pod="csi-node-driver-8cg7x" WorkloadEndpoint="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:45:55.950644 containerd[2017]: time="2025-04-30T00:45:55.950445016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:55.951549 containerd[2017]: time="2025-04-30T00:45:55.951454444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:55.951695 containerd[2017]: time="2025-04-30T00:45:55.951585496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:55.951985 containerd[2017]: time="2025-04-30T00:45:55.951918436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:55.998443 systemd[1]: Started cri-containerd-8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272.scope - libcontainer container 8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272. 
Apr 30 00:45:56.060707 containerd[2017]: time="2025-04-30T00:45:56.060637441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8cg7x,Uid:eb16121b-e738-4b90-8cde-fe4b2beb11f1,Namespace:calico-system,Attempt:1,} returns sandbox id \"8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272\"" Apr 30 00:45:56.066452 containerd[2017]: time="2025-04-30T00:45:56.066351073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 00:45:57.333945 containerd[2017]: time="2025-04-30T00:45:57.332876463Z" level=info msg="StopPodSandbox for \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\"" Apr 30 00:45:57.334567 containerd[2017]: time="2025-04-30T00:45:57.334213623Z" level=info msg="StopPodSandbox for \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\"" Apr 30 00:45:57.586924 containerd[2017]: time="2025-04-30T00:45:57.586733309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:57.594417 containerd[2017]: time="2025-04-30T00:45:57.591018845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" Apr 30 00:45:57.599493 containerd[2017]: time="2025-04-30T00:45:57.599420105Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:57.651940 containerd[2017]: time="2025-04-30T00:45:57.651808253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:57.657525 containerd[2017]: time="2025-04-30T00:45:57.656099333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.589647592s" Apr 30 00:45:57.659766 containerd[2017]: time="2025-04-30T00:45:57.657527189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" Apr 30 00:45:57.688856 containerd[2017]: time="2025-04-30T00:45:57.688798013Z" level=info msg="CreateContainer within sandbox \"8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 00:45:57.752591 systemd-networkd[1933]: calia759c9b5b7a: Gained IPv6LL Apr 30 00:45:57.755323 containerd[2017]: time="2025-04-30T00:45:57.754444145Z" level=info msg="CreateContainer within sandbox \"8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4778dd5722a0631a412de2d6cadae3102b1a5764535a588f69f6e32959626152\"" Apr 30 00:45:57.764080 containerd[2017]: time="2025-04-30T00:45:57.763783505Z" level=info msg="StartContainer for \"4778dd5722a0631a412de2d6cadae3102b1a5764535a588f69f6e32959626152\"" Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.575 [INFO][4974] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:45:57.853592 
containerd[2017]: 2025-04-30 00:45:57.576 [INFO][4974] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" iface="eth0" netns="/var/run/netns/cni-b89d3771-1741-873e-0c11-f4c2785eb66c" Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.578 [INFO][4974] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" iface="eth0" netns="/var/run/netns/cni-b89d3771-1741-873e-0c11-f4c2785eb66c" Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.579 [INFO][4974] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" iface="eth0" netns="/var/run/netns/cni-b89d3771-1741-873e-0c11-f4c2785eb66c" Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.579 [INFO][4974] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.579 [INFO][4974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.768 [INFO][4994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" HandleID="k8s-pod-network.c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.770 [INFO][4994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.770 [INFO][4994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.817 [WARNING][4994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" HandleID="k8s-pod-network.c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.817 [INFO][4994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" HandleID="k8s-pod-network.c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.827 [INFO][4994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:45:57.853592 containerd[2017]: 2025-04-30 00:45:57.846 [INFO][4974] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:45:57.860263 containerd[2017]: time="2025-04-30T00:45:57.856749930Z" level=info msg="TearDown network for sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\" successfully" Apr 30 00:45:57.860263 containerd[2017]: time="2025-04-30T00:45:57.856809738Z" level=info msg="StopPodSandbox for \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\" returns successfully" Apr 30 00:45:57.862734 containerd[2017]: time="2025-04-30T00:45:57.861017142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685b87fb46-6z5zg,Uid:91b11be6-074e-42d1-997b-badf85674f33,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:45:57.864851 systemd[1]: run-netns-cni\x2db89d3771\x2d1741\x2d873e\x2d0c11\x2df4c2785eb66c.mount: Deactivated successfully. Apr 30 00:45:57.937794 systemd[1]: Started cri-containerd-4778dd5722a0631a412de2d6cadae3102b1a5764535a588f69f6e32959626152.scope - libcontainer container 4778dd5722a0631a412de2d6cadae3102b1a5764535a588f69f6e32959626152. Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.617 [INFO][4966] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.617 [INFO][4966] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" iface="eth0" netns="/var/run/netns/cni-ca8b2440-85dd-07c1-b07a-8a5409dd8082" Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.617 [INFO][4966] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" iface="eth0" netns="/var/run/netns/cni-ca8b2440-85dd-07c1-b07a-8a5409dd8082" Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.622 [INFO][4966] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" iface="eth0" netns="/var/run/netns/cni-ca8b2440-85dd-07c1-b07a-8a5409dd8082" Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.623 [INFO][4966] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.623 [INFO][4966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.847 [INFO][5000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" HandleID="k8s-pod-network.85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.847 [INFO][5000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.847 [INFO][5000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.898 [WARNING][5000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" HandleID="k8s-pod-network.85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.898 [INFO][5000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" HandleID="k8s-pod-network.85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.912 [INFO][5000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:45:57.951445 containerd[2017]: 2025-04-30 00:45:57.939 [INFO][4966] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:45:57.955799 containerd[2017]: time="2025-04-30T00:45:57.955504758Z" level=info msg="TearDown network for sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\" successfully" Apr 30 00:45:57.955799 containerd[2017]: time="2025-04-30T00:45:57.955574046Z" level=info msg="StopPodSandbox for \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\" returns successfully" Apr 30 00:45:57.959611 containerd[2017]: time="2025-04-30T00:45:57.958417830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cgtrr,Uid:03ba2d14-056d-41e7-b778-1e52ace77241,Namespace:kube-system,Attempt:1,}" Apr 30 00:45:58.076004 containerd[2017]: time="2025-04-30T00:45:58.075938007Z" level=info msg="StartContainer for \"4778dd5722a0631a412de2d6cadae3102b1a5764535a588f69f6e32959626152\" returns successfully" Apr 30 00:45:58.078818 containerd[2017]: time="2025-04-30T00:45:58.078732579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 00:45:58.081811 kubelet[3213]: I0430 00:45:58.081743 3213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:45:58.319902 (udev-worker)[4860]: Network interface NamePolicy= disabled on kernel command line. 
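The pull a few entries back reports two figures worth relating: 7474935 bytes read and an elapsed time of 1.589647592s for ghcr.io/flatcar/calico/csi:v3.29.3. A back-of-envelope throughput check (illustrative arithmetic only; using the unpacked size of 8844117 bytes instead would give a slightly higher figure):

// Minimal sketch: pull throughput from the two numbers the log reports
// (7474935 bytes read over 1.589647592s). Purely illustrative arithmetic.
package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 7474935.0
	elapsed := 1589647592 * time.Nanosecond
	mibPerSec := bytesRead / elapsed.Seconds() / (1 << 20)
	fmt.Printf("~%.1f MiB/s\n", mibPerSec) // ≈ 4.5 MiB/s
}

That works out to roughly 4.5 MiB/s over the wire for this layer pull.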
Apr 30 00:45:58.325528 systemd-networkd[1933]: cali6f6c8d0a07d: Link UP Apr 30 00:45:58.329277 systemd-networkd[1933]: cali6f6c8d0a07d: Gained carrier Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.061 [INFO][5046] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.106 [INFO][5046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0 coredns-668d6bf9bc- kube-system 03ba2d14-056d-41e7-b778-1e52ace77241 778 0 2025-04-30 00:45:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-63 coredns-668d6bf9bc-cgtrr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f6c8d0a07d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgtrr" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.106 [INFO][5046] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgtrr" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.227 [INFO][5076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" HandleID="k8s-pod-network.d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.252 [INFO][5076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" HandleID="k8s-pod-network.d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029fab0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-63", "pod":"coredns-668d6bf9bc-cgtrr", "timestamp":"2025-04-30 00:45:58.226723636 +0000 UTC"}, Hostname:"ip-172-31-25-63", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.252 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.253 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.253 [INFO][5076] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-63' Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.259 [INFO][5076] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" host="ip-172-31-25-63" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.267 [INFO][5076] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-63" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.277 [INFO][5076] ipam/ipam.go 489: Trying affinity for 192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.282 [INFO][5076] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.289 [INFO][5076] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.289 [INFO][5076] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.64/26 handle="k8s-pod-network.d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" host="ip-172-31-25-63" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.292 [INFO][5076] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1 Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.301 [INFO][5076] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.64/26 handle="k8s-pod-network.d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" host="ip-172-31-25-63" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.310 [INFO][5076] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.66/26] block=192.168.107.64/26 handle="k8s-pod-network.d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" host="ip-172-31-25-63" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.311 [INFO][5076] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.66/26] handle="k8s-pod-network.d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" host="ip-172-31-25-63" Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.311 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:45:58.371853 containerd[2017]: 2025-04-30 00:45:58.311 [INFO][5076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.66/26] IPv6=[] ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" HandleID="k8s-pod-network.d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:45:58.376415 containerd[2017]: 2025-04-30 00:45:58.317 [INFO][5046] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgtrr" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"03ba2d14-056d-41e7-b778-1e52ace77241", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"", Pod:"coredns-668d6bf9bc-cgtrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f6c8d0a07d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:45:58.376415 containerd[2017]: 2025-04-30 00:45:58.317 [INFO][5046] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.66/32] ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgtrr" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:45:58.376415 containerd[2017]: 2025-04-30 00:45:58.317 [INFO][5046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f6c8d0a07d ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgtrr" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:45:58.376415 containerd[2017]: 2025-04-30 00:45:58.321 [INFO][5046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgtrr" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" 
Apr 30 00:45:58.376415 containerd[2017]: 2025-04-30 00:45:58.324 [INFO][5046] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgtrr" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"03ba2d14-056d-41e7-b778-1e52ace77241", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1", Pod:"coredns-668d6bf9bc-cgtrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f6c8d0a07d", MAC:"f6:87:49:fd:1b:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:45:58.376415 containerd[2017]: 2025-04-30 00:45:58.362 [INFO][5046] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1" Namespace="kube-system" Pod="coredns-668d6bf9bc-cgtrr" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:45:58.461732 containerd[2017]: time="2025-04-30T00:45:58.461210513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:58.464396 containerd[2017]: time="2025-04-30T00:45:58.462098525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:58.464396 containerd[2017]: time="2025-04-30T00:45:58.462166949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:58.464707 containerd[2017]: time="2025-04-30T00:45:58.464435993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:58.492665 systemd-networkd[1933]: caliac6d742a5df: Link UP Apr 30 00:45:58.497468 systemd-networkd[1933]: caliac6d742a5df: Gained carrier Apr 30 00:45:58.529421 systemd[1]: Started cri-containerd-d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1.scope - libcontainer container d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1. Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.048 [INFO][5031] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.096 [INFO][5031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0 calico-apiserver-685b87fb46- calico-apiserver 91b11be6-074e-42d1-997b-badf85674f33 776 0 2025-04-30 00:45:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:685b87fb46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-63 calico-apiserver-685b87fb46-6z5zg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac6d742a5df [] []}} ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-6z5zg" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.096 [INFO][5031] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-6z5zg" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.209 [INFO][5071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" HandleID="k8s-pod-network.4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.252 [INFO][5071] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" HandleID="k8s-pod-network.4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-63", "pod":"calico-apiserver-685b87fb46-6z5zg", "timestamp":"2025-04-30 00:45:58.209826304 +0000 UTC"}, Hostname:"ip-172-31-25-63", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.253 [INFO][5071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.316 [INFO][5071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.316 [INFO][5071] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-63' Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.360 [INFO][5071] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" host="ip-172-31-25-63" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.374 [INFO][5071] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-63" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.386 [INFO][5071] ipam/ipam.go 489: Trying affinity for 192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.393 [INFO][5071] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.400 [INFO][5071] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.400 [INFO][5071] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.64/26 handle="k8s-pod-network.4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" host="ip-172-31-25-63" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.405 [INFO][5071] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.438 [INFO][5071] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.64/26 handle="k8s-pod-network.4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" host="ip-172-31-25-63" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.466 [INFO][5071] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.67/26] block=192.168.107.64/26 handle="k8s-pod-network.4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" host="ip-172-31-25-63" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.468 [INFO][5071] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.67/26] handle="k8s-pod-network.4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" host="ip-172-31-25-63" Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.468 [INFO][5071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:45:58.545248 containerd[2017]: 2025-04-30 00:45:58.468 [INFO][5071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.67/26] IPv6=[] ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" HandleID="k8s-pod-network.4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:45:58.549022 containerd[2017]: 2025-04-30 00:45:58.482 [INFO][5031] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-6z5zg" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0", GenerateName:"calico-apiserver-685b87fb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"91b11be6-074e-42d1-997b-badf85674f33", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685b87fb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"", Pod:"calico-apiserver-685b87fb46-6z5zg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac6d742a5df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:45:58.549022 containerd[2017]: 2025-04-30 00:45:58.482 [INFO][5031] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.67/32] ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-6z5zg" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:45:58.549022 containerd[2017]: 2025-04-30 00:45:58.482 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac6d742a5df ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-6z5zg" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:45:58.549022 containerd[2017]: 2025-04-30 00:45:58.490 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-6z5zg" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:45:58.549022 containerd[2017]: 2025-04-30 00:45:58.498 [INFO][5031] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-6z5zg" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0", GenerateName:"calico-apiserver-685b87fb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"91b11be6-074e-42d1-997b-badf85674f33", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685b87fb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab", Pod:"calico-apiserver-685b87fb46-6z5zg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac6d742a5df", MAC:"6e:75:16:4c:58:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:45:58.549022 containerd[2017]: 2025-04-30 00:45:58.537 [INFO][5031] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-6z5zg" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:45:58.644154 containerd[2017]: time="2025-04-30T00:45:58.643328226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:58.644154 containerd[2017]: time="2025-04-30T00:45:58.643493058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:58.644154 containerd[2017]: time="2025-04-30T00:45:58.643533054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:58.644154 containerd[2017]: time="2025-04-30T00:45:58.643779858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:58.699136 containerd[2017]: time="2025-04-30T00:45:58.697953210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cgtrr,Uid:03ba2d14-056d-41e7-b778-1e52ace77241,Namespace:kube-system,Attempt:1,} returns sandbox id \"d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1\"" Apr 30 00:45:58.716255 containerd[2017]: time="2025-04-30T00:45:58.716187222Z" level=info msg="CreateContainer within sandbox \"d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:45:58.738841 systemd[1]: Started cri-containerd-4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab.scope - libcontainer container 4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab. Apr 30 00:45:58.774395 systemd[1]: run-netns-cni\x2dca8b2440\x2d85dd\x2d07c1\x2db07a\x2d8a5409dd8082.mount: Deactivated successfully. Apr 30 00:45:58.806143 containerd[2017]: time="2025-04-30T00:45:58.806055739Z" level=info msg="CreateContainer within sandbox \"d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7648faaf4508d032f549c4ce90ae2cffcfe683c5da20be1893d5919efba4e5ef\"" Apr 30 00:45:58.810477 containerd[2017]: time="2025-04-30T00:45:58.809088475Z" level=info msg="StartContainer for \"7648faaf4508d032f549c4ce90ae2cffcfe683c5da20be1893d5919efba4e5ef\"" Apr 30 00:45:58.901725 systemd[1]: Started cri-containerd-7648faaf4508d032f549c4ce90ae2cffcfe683c5da20be1893d5919efba4e5ef.scope - libcontainer container 7648faaf4508d032f549c4ce90ae2cffcfe683c5da20be1893d5919efba4e5ef. Apr 30 00:45:58.983894 kernel: bpftool[5239]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 00:45:58.984033 containerd[2017]: time="2025-04-30T00:45:58.983735852Z" level=info msg="StartContainer for \"7648faaf4508d032f549c4ce90ae2cffcfe683c5da20be1893d5919efba4e5ef\" returns successfully" Apr 30 00:45:59.026512 containerd[2017]: time="2025-04-30T00:45:59.026426860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685b87fb46-6z5zg,Uid:91b11be6-074e-42d1-997b-badf85674f33,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab\"" Apr 30 00:45:59.343333 containerd[2017]: time="2025-04-30T00:45:59.341630333Z" level=info msg="StopPodSandbox for \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\"" Apr 30 00:45:59.480756 systemd-networkd[1933]: cali6f6c8d0a07d: Gained IPv6LL Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.524 [INFO][5285] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.525 [INFO][5285] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" iface="eth0" netns="/var/run/netns/cni-e44339f7-2222-bc24-ab97-3290f93a949b" Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.526 [INFO][5285] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" iface="eth0" netns="/var/run/netns/cni-e44339f7-2222-bc24-ab97-3290f93a949b" Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.527 [INFO][5285] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" iface="eth0" netns="/var/run/netns/cni-e44339f7-2222-bc24-ab97-3290f93a949b" Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.527 [INFO][5285] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.527 [INFO][5285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.597 [INFO][5304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" HandleID="k8s-pod-network.9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.597 [INFO][5304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.598 [INFO][5304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.624 [WARNING][5304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" HandleID="k8s-pod-network.9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.624 [INFO][5304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" HandleID="k8s-pod-network.9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.628 [INFO][5304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:45:59.638671 containerd[2017]: 2025-04-30 00:45:59.632 [INFO][5285] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:45:59.638671 containerd[2017]: time="2025-04-30T00:45:59.638039335Z" level=info msg="TearDown network for sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\" successfully" Apr 30 00:45:59.638671 containerd[2017]: time="2025-04-30T00:45:59.638082271Z" level=info msg="StopPodSandbox for \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\" returns successfully" Apr 30 00:45:59.641068 containerd[2017]: time="2025-04-30T00:45:59.641001811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2wbvz,Uid:9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1,Namespace:kube-system,Attempt:1,}" Apr 30 00:45:59.651452 systemd[1]: run-netns-cni\x2de44339f7\x2d2222\x2dbc24\x2dab97\x2d3290f93a949b.mount: Deactivated successfully. 
Apr 30 00:45:59.788518 kubelet[3213]: I0430 00:45:59.788378 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cgtrr" podStartSLOduration=38.788353652 podStartE2EDuration="38.788353652s" podCreationTimestamp="2025-04-30 00:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:59.743860255 +0000 UTC m=+44.703397771" watchObservedRunningTime="2025-04-30 00:45:59.788353652 +0000 UTC m=+44.747891156" Apr 30 00:46:00.069727 systemd-networkd[1933]: cali80aeefc76bc: Link UP Apr 30 00:46:00.073535 systemd-networkd[1933]: cali80aeefc76bc: Gained carrier Apr 30 00:46:00.096400 systemd-networkd[1933]: vxlan.calico: Link UP Apr 30 00:46:00.096420 systemd-networkd[1933]: vxlan.calico: Gained carrier Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:45:59.814 [INFO][5324] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0 coredns-668d6bf9bc- kube-system 9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1 802 0 2025-04-30 00:45:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-63 coredns-668d6bf9bc-2wbvz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali80aeefc76bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Namespace="kube-system" Pod="coredns-668d6bf9bc-2wbvz" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:45:59.814 [INFO][5324] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Namespace="kube-system" Pod="coredns-668d6bf9bc-2wbvz" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:45:59.938 [INFO][5341] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" HandleID="k8s-pod-network.b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:45:59.975 [INFO][5341] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" HandleID="k8s-pod-network.b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029b660), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-63", "pod":"coredns-668d6bf9bc-2wbvz", "timestamp":"2025-04-30 00:45:59.938384348 +0000 UTC"}, Hostname:"ip-172-31-25-63", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:45:59.975 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:45:59.975 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:45:59.975 [INFO][5341] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-63' Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:45:59.991 [INFO][5341] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" host="ip-172-31-25-63" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:45:59.998 [INFO][5341] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-63" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:46:00.006 [INFO][5341] ipam/ipam.go 489: Trying affinity for 192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:46:00.010 [INFO][5341] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:46:00.017 [INFO][5341] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:46:00.017 [INFO][5341] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.64/26 handle="k8s-pod-network.b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" host="ip-172-31-25-63" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:46:00.022 [INFO][5341] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179 Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:46:00.032 [INFO][5341] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.64/26 handle="k8s-pod-network.b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" host="ip-172-31-25-63" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:46:00.053 [INFO][5341] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.68/26] block=192.168.107.64/26 handle="k8s-pod-network.b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" host="ip-172-31-25-63" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:46:00.053 [INFO][5341] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.68/26] handle="k8s-pod-network.b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" host="ip-172-31-25-63" Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:46:00.053 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:46:00.113174 containerd[2017]: 2025-04-30 00:46:00.053 [INFO][5341] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.68/26] IPv6=[] ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" HandleID="k8s-pod-network.b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:00.115066 containerd[2017]: 2025-04-30 00:46:00.061 [INFO][5324] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Namespace="kube-system" Pod="coredns-668d6bf9bc-2wbvz" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"", Pod:"coredns-668d6bf9bc-2wbvz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80aeefc76bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:00.115066 containerd[2017]: 2025-04-30 00:46:00.062 [INFO][5324] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.68/32] ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Namespace="kube-system" Pod="coredns-668d6bf9bc-2wbvz" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:00.115066 containerd[2017]: 2025-04-30 00:46:00.062 [INFO][5324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80aeefc76bc ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Namespace="kube-system" Pod="coredns-668d6bf9bc-2wbvz" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:00.115066 containerd[2017]: 2025-04-30 00:46:00.080 [INFO][5324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Namespace="kube-system" Pod="coredns-668d6bf9bc-2wbvz" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" 
Apr 30 00:46:00.115066 containerd[2017]: 2025-04-30 00:46:00.081 [INFO][5324] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Namespace="kube-system" Pod="coredns-668d6bf9bc-2wbvz" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179", Pod:"coredns-668d6bf9bc-2wbvz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80aeefc76bc", MAC:"02:bc:3c:15:93:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:00.115066 containerd[2017]: 2025-04-30 00:46:00.107 [INFO][5324] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179" Namespace="kube-system" Pod="coredns-668d6bf9bc-2wbvz" WorkloadEndpoint="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:00.119231 (udev-worker)[5352]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:46:00.239370 containerd[2017]: time="2025-04-30T00:46:00.234833262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:46:00.239370 containerd[2017]: time="2025-04-30T00:46:00.234911874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:46:00.239370 containerd[2017]: time="2025-04-30T00:46:00.234936342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:46:00.239370 containerd[2017]: time="2025-04-30T00:46:00.235073646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:46:00.296459 systemd[1]: Started cri-containerd-b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179.scope - libcontainer container b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179. Apr 30 00:46:00.334140 containerd[2017]: time="2025-04-30T00:46:00.333269046Z" level=info msg="StopPodSandbox for \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\"" Apr 30 00:46:00.334817 containerd[2017]: time="2025-04-30T00:46:00.334041630Z" level=info msg="StopPodSandbox for \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\"" Apr 30 00:46:00.377731 systemd-networkd[1933]: caliac6d742a5df: Gained IPv6LL Apr 30 00:46:00.515228 containerd[2017]: time="2025-04-30T00:46:00.515076547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2wbvz,Uid:9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1,Namespace:kube-system,Attempt:1,} returns sandbox id \"b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179\"" Apr 30 00:46:00.529489 containerd[2017]: time="2025-04-30T00:46:00.529050391Z" level=info msg="CreateContainer within sandbox \"b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:46:00.630549 containerd[2017]: time="2025-04-30T00:46:00.628580588Z" level=info msg="CreateContainer within sandbox \"b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6cb142ba496488314ec1def52298c098c5e653e664d519c135ed9ec5b50bd664\"" Apr 30 00:46:00.635195 containerd[2017]: time="2025-04-30T00:46:00.634799336Z" level=info msg="StartContainer for \"6cb142ba496488314ec1def52298c098c5e653e664d519c135ed9ec5b50bd664\"" Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.568 [INFO][5440] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.568 [INFO][5440] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" iface="eth0" netns="/var/run/netns/cni-6933128f-3e6e-c435-5455-09146b48ae2a" Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.579 [INFO][5440] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" iface="eth0" netns="/var/run/netns/cni-6933128f-3e6e-c435-5455-09146b48ae2a" Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.579 [INFO][5440] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" iface="eth0" netns="/var/run/netns/cni-6933128f-3e6e-c435-5455-09146b48ae2a" Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.579 [INFO][5440] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.579 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.748 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" HandleID="k8s-pod-network.26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.750 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.750 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.794 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" HandleID="k8s-pod-network.26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.795 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" HandleID="k8s-pod-network.26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.801 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:00.827772 containerd[2017]: 2025-04-30 00:46:00.817 [INFO][5440] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:00.838505 containerd[2017]: time="2025-04-30T00:46:00.833188029Z" level=info msg="TearDown network for sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\" successfully" Apr 30 00:46:00.838505 containerd[2017]: time="2025-04-30T00:46:00.833280957Z" level=info msg="StopPodSandbox for \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\" returns successfully" Apr 30 00:46:00.844189 containerd[2017]: time="2025-04-30T00:46:00.840782457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685b87fb46-zcfxw,Uid:d404a0b8-b738-4be9-a74e-856f784c975d,Namespace:calico-apiserver,Attempt:1,}" Apr 30 00:46:00.843365 systemd[1]: run-netns-cni\x2d6933128f\x2d3e6e\x2dc435\x2d5455\x2d09146b48ae2a.mount: Deactivated successfully. Apr 30 00:46:00.889215 systemd[1]: Started cri-containerd-6cb142ba496488314ec1def52298c098c5e653e664d519c135ed9ec5b50bd664.scope - libcontainer container 6cb142ba496488314ec1def52298c098c5e653e664d519c135ed9ec5b50bd664. 
Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.696 [INFO][5439] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.696 [INFO][5439] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" iface="eth0" netns="/var/run/netns/cni-5606fdfd-7d4c-98d0-d633-382e683f8498" Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.697 [INFO][5439] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" iface="eth0" netns="/var/run/netns/cni-5606fdfd-7d4c-98d0-d633-382e683f8498" Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.698 [INFO][5439] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" iface="eth0" netns="/var/run/netns/cni-5606fdfd-7d4c-98d0-d633-382e683f8498" Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.698 [INFO][5439] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.698 [INFO][5439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.816 [INFO][5476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" HandleID="k8s-pod-network.c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.816 [INFO][5476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.816 [INFO][5476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.878 [WARNING][5476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" HandleID="k8s-pod-network.c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.878 [INFO][5476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" HandleID="k8s-pod-network.c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.885 [INFO][5476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:00.913651 containerd[2017]: 2025-04-30 00:46:00.898 [INFO][5439] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:00.919208 containerd[2017]: time="2025-04-30T00:46:00.918988989Z" level=info msg="TearDown network for sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\" successfully" Apr 30 00:46:00.919208 containerd[2017]: time="2025-04-30T00:46:00.919034205Z" level=info msg="StopPodSandbox for \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\" returns successfully" Apr 30 00:46:00.934223 containerd[2017]: time="2025-04-30T00:46:00.932441277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc54cc8f-c7qvh,Uid:da044bc6-8690-4c3f-9ab1-4db58205c9ed,Namespace:calico-system,Attempt:1,}" Apr 30 00:46:00.939163 systemd[1]: run-netns-cni\x2d5606fdfd\x2d7d4c\x2d98d0\x2dd633\x2d382e683f8498.mount: Deactivated successfully. Apr 30 00:46:01.101204 containerd[2017]: time="2025-04-30T00:46:01.098936898Z" level=info msg="StartContainer for \"6cb142ba496488314ec1def52298c098c5e653e664d519c135ed9ec5b50bd664\" returns successfully" Apr 30 00:46:01.272923 systemd-networkd[1933]: cali80aeefc76bc: Gained IPv6LL Apr 30 00:46:01.273392 systemd-networkd[1933]: vxlan.calico: Gained IPv6LL Apr 30 00:46:01.643640 (udev-worker)[5391]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:46:01.654995 systemd-networkd[1933]: cali309d4b24323: Link UP Apr 30 00:46:01.663867 systemd-networkd[1933]: cali309d4b24323: Gained carrier Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.240 [INFO][5510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0 calico-apiserver-685b87fb46- calico-apiserver d404a0b8-b738-4be9-a74e-856f784c975d 820 0 2025-04-30 00:45:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:685b87fb46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-63 calico-apiserver-685b87fb46-zcfxw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali309d4b24323 [] []}} ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-zcfxw" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.240 [INFO][5510] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-zcfxw" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.471 [INFO][5577] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" HandleID="k8s-pod-network.6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.540 [INFO][5577] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" 
HandleID="k8s-pod-network.6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400048c450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-63", "pod":"calico-apiserver-685b87fb46-zcfxw", "timestamp":"2025-04-30 00:46:01.471502868 +0000 UTC"}, Hostname:"ip-172-31-25-63", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.541 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.541 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.541 [INFO][5577] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-63' Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.546 [INFO][5577] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" host="ip-172-31-25-63" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.559 [INFO][5577] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-63" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.573 [INFO][5577] ipam/ipam.go 489: Trying affinity for 192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.581 [INFO][5577] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.588 [INFO][5577] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.588 [INFO][5577] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.64/26 handle="k8s-pod-network.6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" host="ip-172-31-25-63" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.593 [INFO][5577] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86 Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.606 [INFO][5577] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.64/26 handle="k8s-pod-network.6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" host="ip-172-31-25-63" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.625 [INFO][5577] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.69/26] block=192.168.107.64/26 handle="k8s-pod-network.6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" host="ip-172-31-25-63" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.625 [INFO][5577] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.69/26] handle="k8s-pod-network.6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" host="ip-172-31-25-63" Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.626 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:46:01.723696 containerd[2017]: 2025-04-30 00:46:01.626 [INFO][5577] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.69/26] IPv6=[] ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" HandleID="k8s-pod-network.6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:01.727386 containerd[2017]: 2025-04-30 00:46:01.634 [INFO][5510] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-zcfxw" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0", GenerateName:"calico-apiserver-685b87fb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"d404a0b8-b738-4be9-a74e-856f784c975d", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685b87fb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"", Pod:"calico-apiserver-685b87fb46-zcfxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali309d4b24323", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:01.727386 containerd[2017]: 2025-04-30 00:46:01.636 [INFO][5510] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.69/32] ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-zcfxw" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:01.727386 containerd[2017]: 2025-04-30 00:46:01.636 [INFO][5510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali309d4b24323 ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-zcfxw" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:01.727386 containerd[2017]: 2025-04-30 00:46:01.671 [INFO][5510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-zcfxw" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:01.727386 containerd[2017]: 2025-04-30 00:46:01.674 [INFO][5510] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-zcfxw" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0", GenerateName:"calico-apiserver-685b87fb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"d404a0b8-b738-4be9-a74e-856f784c975d", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685b87fb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86", Pod:"calico-apiserver-685b87fb46-zcfxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali309d4b24323", MAC:"46:29:e1:b5:c8:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:01.727386 containerd[2017]: 2025-04-30 00:46:01.709 [INFO][5510] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86" Namespace="calico-apiserver" Pod="calico-apiserver-685b87fb46-zcfxw" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:01.809739 systemd-networkd[1933]: cali782318dfcc0: Link UP Apr 30 00:46:01.817912 systemd-networkd[1933]: cali782318dfcc0: Gained carrier Apr 30 00:46:01.863866 kubelet[3213]: I0430 00:46:01.863655 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2wbvz" podStartSLOduration=40.863507662 podStartE2EDuration="40.863507662s" podCreationTimestamp="2025-04-30 00:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:46:01.862864186 +0000 UTC m=+46.822401714" watchObservedRunningTime="2025-04-30 00:46:01.863507662 +0000 UTC m=+46.823045178" Apr 30 00:46:01.868159 containerd[2017]: time="2025-04-30T00:46:01.867874282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:46:01.873374 containerd[2017]: time="2025-04-30T00:46:01.873230422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" Apr 30 00:46:01.874564 containerd[2017]: time="2025-04-30T00:46:01.872890066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:46:01.874564 containerd[2017]: time="2025-04-30T00:46:01.874176154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:46:01.874564 containerd[2017]: time="2025-04-30T00:46:01.874205170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:46:01.874564 containerd[2017]: time="2025-04-30T00:46:01.874370446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:46:01.877467 containerd[2017]: time="2025-04-30T00:46:01.877340182Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.203 [INFO][5527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0 calico-kube-controllers-7bfc54cc8f- calico-system da044bc6-8690-4c3f-9ab1-4db58205c9ed 822 0 2025-04-30 00:45:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bfc54cc8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-63 calico-kube-controllers-7bfc54cc8f-c7qvh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali782318dfcc0 [] []}} ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Namespace="calico-system" Pod="calico-kube-controllers-7bfc54cc8f-c7qvh" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.204 [INFO][5527] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Namespace="calico-system" Pod="calico-kube-controllers-7bfc54cc8f-c7qvh" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.499 [INFO][5568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" HandleID="k8s-pod-network.eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.556 [INFO][5568] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" HandleID="k8s-pod-network.eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011b7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-63", "pod":"calico-kube-controllers-7bfc54cc8f-c7qvh", "timestamp":"2025-04-30 00:46:01.499498088 +0000 UTC"}, Hostname:"ip-172-31-25-63", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.557 [INFO][5568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.626 [INFO][5568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.626 [INFO][5568] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-63' Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.652 [INFO][5568] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" host="ip-172-31-25-63" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.676 [INFO][5568] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-63" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.693 [INFO][5568] ipam/ipam.go 489: Trying affinity for 192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.699 [INFO][5568] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.710 [INFO][5568] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.64/26 host="ip-172-31-25-63" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.711 [INFO][5568] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.64/26 handle="k8s-pod-network.eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" host="ip-172-31-25-63" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.718 [INFO][5568] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.734 [INFO][5568] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.64/26 handle="k8s-pod-network.eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" host="ip-172-31-25-63" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.758 [INFO][5568] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.70/26] block=192.168.107.64/26 handle="k8s-pod-network.eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" host="ip-172-31-25-63" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.758 [INFO][5568] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.70/26] handle="k8s-pod-network.eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" host="ip-172-31-25-63" Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.759 [INFO][5568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 00:46:01.914238 containerd[2017]: 2025-04-30 00:46:01.759 [INFO][5568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.70/26] IPv6=[] ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" HandleID="k8s-pod-network.eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:01.916614 containerd[2017]: 2025-04-30 00:46:01.777 [INFO][5527] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Namespace="calico-system" Pod="calico-kube-controllers-7bfc54cc8f-c7qvh" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0", GenerateName:"calico-kube-controllers-7bfc54cc8f-", Namespace:"calico-system", SelfLink:"", UID:"da044bc6-8690-4c3f-9ab1-4db58205c9ed", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfc54cc8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"", Pod:"calico-kube-controllers-7bfc54cc8f-c7qvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali782318dfcc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:01.916614 containerd[2017]: 2025-04-30 00:46:01.778 [INFO][5527] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.70/32] ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Namespace="calico-system" Pod="calico-kube-controllers-7bfc54cc8f-c7qvh" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:01.916614 containerd[2017]: 2025-04-30 00:46:01.778 [INFO][5527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali782318dfcc0 ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Namespace="calico-system" Pod="calico-kube-controllers-7bfc54cc8f-c7qvh" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:01.916614 containerd[2017]: 2025-04-30 00:46:01.822 [INFO][5527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Namespace="calico-system" Pod="calico-kube-controllers-7bfc54cc8f-c7qvh" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:01.916614 containerd[2017]: 2025-04-30 00:46:01.834 [INFO][5527] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Namespace="calico-system" Pod="calico-kube-controllers-7bfc54cc8f-c7qvh" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0", GenerateName:"calico-kube-controllers-7bfc54cc8f-", Namespace:"calico-system", SelfLink:"", UID:"da044bc6-8690-4c3f-9ab1-4db58205c9ed", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfc54cc8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa", Pod:"calico-kube-controllers-7bfc54cc8f-c7qvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali782318dfcc0", MAC:"26:09:51:4e:72:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:01.916614 containerd[2017]: 2025-04-30 00:46:01.901 [INFO][5527] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa" Namespace="calico-system" Pod="calico-kube-controllers-7bfc54cc8f-c7qvh" WorkloadEndpoint="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:01.941047 containerd[2017]: time="2025-04-30T00:46:01.938035450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:46:01.974802 systemd[1]: Started cri-containerd-6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86.scope - libcontainer container 6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86. 
Apr 30 00:46:01.982279 containerd[2017]: time="2025-04-30T00:46:01.982072666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 3.903215683s" Apr 30 00:46:01.982279 containerd[2017]: time="2025-04-30T00:46:01.982189594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" Apr 30 00:46:01.988916 containerd[2017]: time="2025-04-30T00:46:01.987701950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:46:01.996566 containerd[2017]: time="2025-04-30T00:46:01.994000198Z" level=info msg="CreateContainer within sandbox \"8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 00:46:02.061929 containerd[2017]: time="2025-04-30T00:46:02.060425731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:46:02.061929 containerd[2017]: time="2025-04-30T00:46:02.060528595Z" level=info msg="CreateContainer within sandbox \"8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cb507255db81885419675ce041133789a9aa437bd1b8ce84957f3c1391c59e91\"" Apr 30 00:46:02.062646 containerd[2017]: time="2025-04-30T00:46:02.060614143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:46:02.062646 containerd[2017]: time="2025-04-30T00:46:02.060658591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:46:02.062646 containerd[2017]: time="2025-04-30T00:46:02.061193119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:46:02.064558 containerd[2017]: time="2025-04-30T00:46:02.063804643Z" level=info msg="StartContainer for \"cb507255db81885419675ce041133789a9aa437bd1b8ce84957f3c1391c59e91\"" Apr 30 00:46:02.150519 systemd[1]: Started cri-containerd-eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa.scope - libcontainer container eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa. Apr 30 00:46:02.204253 systemd[1]: Started cri-containerd-cb507255db81885419675ce041133789a9aa437bd1b8ce84957f3c1391c59e91.scope - libcontainer container cb507255db81885419675ce041133789a9aa437bd1b8ce84957f3c1391c59e91. 
Apr 30 00:46:02.378453 containerd[2017]: time="2025-04-30T00:46:02.377781104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685b87fb46-zcfxw,Uid:d404a0b8-b738-4be9-a74e-856f784c975d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86\"" Apr 30 00:46:02.411219 containerd[2017]: time="2025-04-30T00:46:02.410932173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc54cc8f-c7qvh,Uid:da044bc6-8690-4c3f-9ab1-4db58205c9ed,Namespace:calico-system,Attempt:1,} returns sandbox id \"eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa\"" Apr 30 00:46:02.426597 containerd[2017]: time="2025-04-30T00:46:02.423629769Z" level=info msg="StartContainer for \"cb507255db81885419675ce041133789a9aa437bd1b8ce84957f3c1391c59e91\" returns successfully" Apr 30 00:46:02.493736 kubelet[3213]: I0430 00:46:02.493449 3213 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 00:46:02.493736 kubelet[3213]: I0430 00:46:02.493499 3213 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 00:46:02.744428 systemd-networkd[1933]: cali309d4b24323: Gained IPv6LL Apr 30 00:46:03.640529 systemd-networkd[1933]: cali782318dfcc0: Gained IPv6LL Apr 30 00:46:05.721785 systemd[1]: Started sshd@7-172.31.25.63:22-147.75.109.163:53482.service - OpenSSH per-connection server daemon (147.75.109.163:53482). Apr 30 00:46:05.863770 ntpd[1987]: Listen normally on 8 vxlan.calico 192.168.107.64:123 Apr 30 00:46:05.863975 ntpd[1987]: Listen normally on 9 calia759c9b5b7a [fe80::ecee:eeff:feee:eeee%4]:123 Apr 30 00:46:05.864593 ntpd[1987]: 30 Apr 00:46:05 ntpd[1987]: Listen normally on 8 vxlan.calico 192.168.107.64:123 Apr 30 00:46:05.864593 ntpd[1987]: 30 Apr 00:46:05 ntpd[1987]: Listen normally on 9 calia759c9b5b7a [fe80::ecee:eeff:feee:eeee%4]:123 Apr 30 00:46:05.864593 ntpd[1987]: 30 Apr 00:46:05 ntpd[1987]: Listen normally on 10 cali6f6c8d0a07d [fe80::ecee:eeff:feee:eeee%5]:123 Apr 30 00:46:05.864593 ntpd[1987]: 30 Apr 00:46:05 ntpd[1987]: Listen normally on 11 caliac6d742a5df [fe80::ecee:eeff:feee:eeee%6]:123 Apr 30 00:46:05.864280 ntpd[1987]: Listen normally on 10 cali6f6c8d0a07d [fe80::ecee:eeff:feee:eeee%5]:123 Apr 30 00:46:05.864886 ntpd[1987]: 30 Apr 00:46:05 ntpd[1987]: Listen normally on 12 cali80aeefc76bc [fe80::ecee:eeff:feee:eeee%7]:123 Apr 30 00:46:05.864886 ntpd[1987]: 30 Apr 00:46:05 ntpd[1987]: Listen normally on 13 vxlan.calico [fe80::6491:4bff:fe51:af29%8]:123 Apr 30 00:46:05.864394 ntpd[1987]: Listen normally on 11 caliac6d742a5df [fe80::ecee:eeff:feee:eeee%6]:123 Apr 30 00:46:05.866973 ntpd[1987]: 30 Apr 00:46:05 ntpd[1987]: Listen normally on 14 cali309d4b24323 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 30 00:46:05.866973 ntpd[1987]: 30 Apr 00:46:05 ntpd[1987]: Listen normally on 15 cali782318dfcc0 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 30 00:46:05.864636 ntpd[1987]: Listen normally on 12 cali80aeefc76bc [fe80::ecee:eeff:feee:eeee%7]:123 Apr 30 00:46:05.864743 ntpd[1987]: Listen normally on 13 vxlan.calico [fe80::6491:4bff:fe51:af29%8]:123 Apr 30 00:46:05.864943 ntpd[1987]: Listen normally on 14 cali309d4b24323 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 30 00:46:05.865095 ntpd[1987]: Listen normally on 15 cali782318dfcc0 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 30 
00:46:06.081146 sshd[5766]: Accepted publickey for core from 147.75.109.163 port 53482 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:46:06.089082 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:46:06.109621 systemd-logind[1995]: New session 8 of user core. Apr 30 00:46:06.119510 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:46:06.575169 sshd[5766]: pam_unix(sshd:session): session closed for user core Apr 30 00:46:06.587724 systemd[1]: sshd@7-172.31.25.63:22-147.75.109.163:53482.service: Deactivated successfully. Apr 30 00:46:06.594751 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:46:06.598868 systemd-logind[1995]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:46:06.601414 systemd-logind[1995]: Removed session 8. Apr 30 00:46:06.665055 containerd[2017]: time="2025-04-30T00:46:06.664989266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:46:06.666520 containerd[2017]: time="2025-04-30T00:46:06.666469034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" Apr 30 00:46:06.668024 containerd[2017]: time="2025-04-30T00:46:06.667319486Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:46:06.671486 containerd[2017]: time="2025-04-30T00:46:06.671418086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:46:06.673335 containerd[2017]: time="2025-04-30T00:46:06.673270838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 4.684684056s" Apr 30 00:46:06.673335 containerd[2017]: time="2025-04-30T00:46:06.673330238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:46:06.680170 containerd[2017]: time="2025-04-30T00:46:06.679852406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 00:46:06.693745 containerd[2017]: time="2025-04-30T00:46:06.693511286Z" level=info msg="CreateContainer within sandbox \"4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:46:06.719478 containerd[2017]: time="2025-04-30T00:46:06.719309498Z" level=info msg="CreateContainer within sandbox \"4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a11265351b4d0e8405304e5b3eb86f30eca3c8c15d4b3f584506a8549b4c3e96\"" Apr 30 00:46:06.721132 containerd[2017]: time="2025-04-30T00:46:06.720315230Z" level=info msg="StartContainer for \"a11265351b4d0e8405304e5b3eb86f30eca3c8c15d4b3f584506a8549b4c3e96\"" Apr 30 00:46:06.789489 systemd[1]: Started 
cri-containerd-a11265351b4d0e8405304e5b3eb86f30eca3c8c15d4b3f584506a8549b4c3e96.scope - libcontainer container a11265351b4d0e8405304e5b3eb86f30eca3c8c15d4b3f584506a8549b4c3e96. Apr 30 00:46:06.881702 containerd[2017]: time="2025-04-30T00:46:06.881535771Z" level=info msg="StartContainer for \"a11265351b4d0e8405304e5b3eb86f30eca3c8c15d4b3f584506a8549b4c3e96\" returns successfully" Apr 30 00:46:06.946679 kubelet[3213]: I0430 00:46:06.946591 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8cg7x" podStartSLOduration=28.024537306 podStartE2EDuration="33.946545267s" podCreationTimestamp="2025-04-30 00:45:33 +0000 UTC" firstStartedPulling="2025-04-30 00:45:56.063051565 +0000 UTC m=+41.022589069" lastFinishedPulling="2025-04-30 00:46:01.985059526 +0000 UTC m=+46.944597030" observedRunningTime="2025-04-30 00:46:02.880394399 +0000 UTC m=+47.839931927" watchObservedRunningTime="2025-04-30 00:46:06.946545267 +0000 UTC m=+51.906082759" Apr 30 00:46:07.067863 containerd[2017]: time="2025-04-30T00:46:07.066172212Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:46:07.069227 containerd[2017]: time="2025-04-30T00:46:07.069179028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 00:46:07.073675 containerd[2017]: time="2025-04-30T00:46:07.073593684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 393.657998ms" Apr 30 00:46:07.074191 containerd[2017]: time="2025-04-30T00:46:07.073863156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" Apr 30 00:46:07.077632 containerd[2017]: time="2025-04-30T00:46:07.077235084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 00:46:07.079103 containerd[2017]: time="2025-04-30T00:46:07.078842340Z" level=info msg="CreateContainer within sandbox \"6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 00:46:07.114442 containerd[2017]: time="2025-04-30T00:46:07.112598844Z" level=info msg="CreateContainer within sandbox \"6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"38a158cfdf93d2c0b16958dba2df63f0fcdd1c11f462468c8449a3457b7deecf\"" Apr 30 00:46:07.117210 containerd[2017]: time="2025-04-30T00:46:07.116397516Z" level=info msg="StartContainer for \"38a158cfdf93d2c0b16958dba2df63f0fcdd1c11f462468c8449a3457b7deecf\"" Apr 30 00:46:07.193419 systemd[1]: Started cri-containerd-38a158cfdf93d2c0b16958dba2df63f0fcdd1c11f462468c8449a3457b7deecf.scope - libcontainer container 38a158cfdf93d2c0b16958dba2df63f0fcdd1c11f462468c8449a3457b7deecf. 
Apr 30 00:46:07.291685 containerd[2017]: time="2025-04-30T00:46:07.291502777Z" level=info msg="StartContainer for \"38a158cfdf93d2c0b16958dba2df63f0fcdd1c11f462468c8449a3457b7deecf\" returns successfully" Apr 30 00:46:07.956985 kubelet[3213]: I0430 00:46:07.956876 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-685b87fb46-6z5zg" podStartSLOduration=29.310020882 podStartE2EDuration="36.956851864s" podCreationTimestamp="2025-04-30 00:45:31 +0000 UTC" firstStartedPulling="2025-04-30 00:45:59.031832236 +0000 UTC m=+43.991369740" lastFinishedPulling="2025-04-30 00:46:06.67866323 +0000 UTC m=+51.638200722" observedRunningTime="2025-04-30 00:46:06.951688791 +0000 UTC m=+51.911226319" watchObservedRunningTime="2025-04-30 00:46:07.956851864 +0000 UTC m=+52.916389368" Apr 30 00:46:08.924388 kubelet[3213]: I0430 00:46:08.922614 3213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:46:09.195893 kubelet[3213]: I0430 00:46:09.195482 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-685b87fb46-zcfxw" podStartSLOduration=33.507246874 podStartE2EDuration="38.195458438s" podCreationTimestamp="2025-04-30 00:45:31 +0000 UTC" firstStartedPulling="2025-04-30 00:46:02.386800856 +0000 UTC m=+47.346338360" lastFinishedPulling="2025-04-30 00:46:07.07501242 +0000 UTC m=+52.034549924" observedRunningTime="2025-04-30 00:46:07.95859598 +0000 UTC m=+52.918133520" watchObservedRunningTime="2025-04-30 00:46:09.195458438 +0000 UTC m=+54.154995930" Apr 30 00:46:09.893215 containerd[2017]: time="2025-04-30T00:46:09.892584354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:46:09.895133 containerd[2017]: time="2025-04-30T00:46:09.895049538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" Apr 30 00:46:09.898320 containerd[2017]: time="2025-04-30T00:46:09.898221858Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:46:09.902960 containerd[2017]: time="2025-04-30T00:46:09.902585430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:46:09.904695 containerd[2017]: time="2025-04-30T00:46:09.903974850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.826658118s" Apr 30 00:46:09.904695 containerd[2017]: time="2025-04-30T00:46:09.904039782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" Apr 30 00:46:09.943186 containerd[2017]: time="2025-04-30T00:46:09.943061658Z" level=info msg="CreateContainer within sandbox \"eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 00:46:09.980978 containerd[2017]: time="2025-04-30T00:46:09.980899746Z" level=info msg="CreateContainer within sandbox \"eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7a1634c7359a3eb2ae6031e34605106388ca778eb743f7b8b635b04f8c4c34ef\"" Apr 30 00:46:09.982898 containerd[2017]: time="2025-04-30T00:46:09.982193550Z" level=info msg="StartContainer for \"7a1634c7359a3eb2ae6031e34605106388ca778eb743f7b8b635b04f8c4c34ef\"" Apr 30 00:46:10.071684 systemd[1]: Started cri-containerd-7a1634c7359a3eb2ae6031e34605106388ca778eb743f7b8b635b04f8c4c34ef.scope - libcontainer container 7a1634c7359a3eb2ae6031e34605106388ca778eb743f7b8b635b04f8c4c34ef. Apr 30 00:46:10.234047 containerd[2017]: time="2025-04-30T00:46:10.233865555Z" level=info msg="StartContainer for \"7a1634c7359a3eb2ae6031e34605106388ca778eb743f7b8b635b04f8c4c34ef\" returns successfully" Apr 30 00:46:11.041100 kubelet[3213]: I0430 00:46:11.040431 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bfc54cc8f-c7qvh" podStartSLOduration=30.561324526 podStartE2EDuration="38.040407195s" podCreationTimestamp="2025-04-30 00:45:33 +0000 UTC" firstStartedPulling="2025-04-30 00:46:02.429965025 +0000 UTC m=+47.389502529" lastFinishedPulling="2025-04-30 00:46:09.909047694 +0000 UTC m=+54.868585198" observedRunningTime="2025-04-30 00:46:10.971964799 +0000 UTC m=+55.931502423" watchObservedRunningTime="2025-04-30 00:46:11.040407195 +0000 UTC m=+55.999944699" Apr 30 00:46:11.628695 systemd[1]: Started sshd@8-172.31.25.63:22-147.75.109.163:46924.service - OpenSSH per-connection server daemon (147.75.109.163:46924). Apr 30 00:46:11.907615 sshd[5942]: Accepted publickey for core from 147.75.109.163 port 46924 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:46:11.910841 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:46:11.919284 systemd-logind[1995]: New session 9 of user core. Apr 30 00:46:11.932386 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 00:46:12.235865 sshd[5942]: pam_unix(sshd:session): session closed for user core Apr 30 00:46:12.243619 systemd[1]: sshd@8-172.31.25.63:22-147.75.109.163:46924.service: Deactivated successfully. Apr 30 00:46:12.248083 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:46:12.249920 systemd-logind[1995]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:46:12.251949 systemd-logind[1995]: Removed session 9. Apr 30 00:46:15.345351 containerd[2017]: time="2025-04-30T00:46:15.345295689Z" level=info msg="StopPodSandbox for \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\"" Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.456 [WARNING][5971] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"03ba2d14-056d-41e7-b778-1e52ace77241", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1", Pod:"coredns-668d6bf9bc-cgtrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f6c8d0a07d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.456 [INFO][5971] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.456 [INFO][5971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" iface="eth0" netns="" Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.456 [INFO][5971] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.457 [INFO][5971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.500 [INFO][5979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" HandleID="k8s-pod-network.85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.500 [INFO][5979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.500 [INFO][5979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.516 [WARNING][5979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" HandleID="k8s-pod-network.85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.516 [INFO][5979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" HandleID="k8s-pod-network.85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.519 [INFO][5979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:15.525182 containerd[2017]: 2025-04-30 00:46:15.522 [INFO][5971] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:46:15.527038 containerd[2017]: time="2025-04-30T00:46:15.525331858Z" level=info msg="TearDown network for sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\" successfully" Apr 30 00:46:15.527038 containerd[2017]: time="2025-04-30T00:46:15.525373366Z" level=info msg="StopPodSandbox for \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\" returns successfully" Apr 30 00:46:15.527038 containerd[2017]: time="2025-04-30T00:46:15.526703818Z" level=info msg="RemovePodSandbox for \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\"" Apr 30 00:46:15.527038 containerd[2017]: time="2025-04-30T00:46:15.526759558Z" level=info msg="Forcibly stopping sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\"" Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.615 [WARNING][5998] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"03ba2d14-056d-41e7-b778-1e52ace77241", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"d7c6b74e271b85914e0943442a46b6222bf316a48db20220e8620ef0df2ed2d1", Pod:"coredns-668d6bf9bc-cgtrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f6c8d0a07d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.615 [INFO][5998] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.615 [INFO][5998] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" iface="eth0" netns="" Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.615 [INFO][5998] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.615 [INFO][5998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.655 [INFO][6005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" HandleID="k8s-pod-network.85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.655 [INFO][6005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.655 [INFO][6005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.671 [WARNING][6005] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" HandleID="k8s-pod-network.85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.671 [INFO][6005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" HandleID="k8s-pod-network.85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--cgtrr-eth0" Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.673 [INFO][6005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:15.682100 containerd[2017]: 2025-04-30 00:46:15.676 [INFO][5998] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532" Apr 30 00:46:15.682100 containerd[2017]: time="2025-04-30T00:46:15.678874378Z" level=info msg="TearDown network for sandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\" successfully" Apr 30 00:46:15.691081 containerd[2017]: time="2025-04-30T00:46:15.690499895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:46:15.691081 containerd[2017]: time="2025-04-30T00:46:15.690614351Z" level=info msg="RemovePodSandbox \"85ca374109378b2523f39597df5db0bd098133583335b3dc95531f821b97d532\" returns successfully" Apr 30 00:46:15.692923 containerd[2017]: time="2025-04-30T00:46:15.691695719Z" level=info msg="StopPodSandbox for \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\"" Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.771 [WARNING][6024] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0", GenerateName:"calico-apiserver-685b87fb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"91b11be6-074e-42d1-997b-badf85674f33", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685b87fb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab", Pod:"calico-apiserver-685b87fb46-6z5zg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac6d742a5df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.772 [INFO][6024] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.772 [INFO][6024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" iface="eth0" netns="" Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.772 [INFO][6024] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.772 [INFO][6024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.814 [INFO][6031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" HandleID="k8s-pod-network.c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.814 [INFO][6031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.815 [INFO][6031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.827 [WARNING][6031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" HandleID="k8s-pod-network.c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.827 [INFO][6031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" HandleID="k8s-pod-network.c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.830 [INFO][6031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:15.834916 containerd[2017]: 2025-04-30 00:46:15.832 [INFO][6024] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:46:15.836826 containerd[2017]: time="2025-04-30T00:46:15.835874135Z" level=info msg="TearDown network for sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\" successfully" Apr 30 00:46:15.836826 containerd[2017]: time="2025-04-30T00:46:15.835919999Z" level=info msg="StopPodSandbox for \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\" returns successfully" Apr 30 00:46:15.836826 containerd[2017]: time="2025-04-30T00:46:15.836728007Z" level=info msg="RemovePodSandbox for \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\"" Apr 30 00:46:15.837553 containerd[2017]: time="2025-04-30T00:46:15.836774567Z" level=info msg="Forcibly stopping sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\"" Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.912 [WARNING][6049] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0", GenerateName:"calico-apiserver-685b87fb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"91b11be6-074e-42d1-997b-badf85674f33", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685b87fb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"4cc63158e449f245cc3a6a109cace1e9753fb0528daa21742370f6c07e6871ab", Pod:"calico-apiserver-685b87fb46-6z5zg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac6d742a5df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.913 [INFO][6049] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.913 [INFO][6049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" iface="eth0" netns="" Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.913 [INFO][6049] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.913 [INFO][6049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.962 [INFO][6056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" HandleID="k8s-pod-network.c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.962 [INFO][6056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.962 [INFO][6056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.979 [WARNING][6056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" HandleID="k8s-pod-network.c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.980 [INFO][6056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" HandleID="k8s-pod-network.c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--6z5zg-eth0" Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.982 [INFO][6056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:15.988852 containerd[2017]: 2025-04-30 00:46:15.985 [INFO][6049] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55" Apr 30 00:46:15.988852 containerd[2017]: time="2025-04-30T00:46:15.988808028Z" level=info msg="TearDown network for sandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\" successfully" Apr 30 00:46:15.996541 containerd[2017]: time="2025-04-30T00:46:15.996476160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:46:15.996736 containerd[2017]: time="2025-04-30T00:46:15.996624312Z" level=info msg="RemovePodSandbox \"c5732ce262d310c9c1fdf7cf91a90cb831d379f794a31be13774d362751e4a55\" returns successfully" Apr 30 00:46:15.997852 containerd[2017]: time="2025-04-30T00:46:15.997619364Z" level=info msg="StopPodSandbox for \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\"" Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.090 [WARNING][6074] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0", GenerateName:"calico-apiserver-685b87fb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"d404a0b8-b738-4be9-a74e-856f784c975d", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685b87fb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86", Pod:"calico-apiserver-685b87fb46-zcfxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali309d4b24323", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.090 [INFO][6074] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.090 [INFO][6074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" iface="eth0" netns="" Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.090 [INFO][6074] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.091 [INFO][6074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.149 [INFO][6084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" HandleID="k8s-pod-network.26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.150 [INFO][6084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.150 [INFO][6084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.165 [WARNING][6084] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" HandleID="k8s-pod-network.26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.165 [INFO][6084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" HandleID="k8s-pod-network.26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.168 [INFO][6084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:16.174355 containerd[2017]: 2025-04-30 00:46:16.171 [INFO][6074] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:16.178175 containerd[2017]: time="2025-04-30T00:46:16.177916281Z" level=info msg="TearDown network for sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\" successfully" Apr 30 00:46:16.178175 containerd[2017]: time="2025-04-30T00:46:16.177972273Z" level=info msg="StopPodSandbox for \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\" returns successfully" Apr 30 00:46:16.179300 containerd[2017]: time="2025-04-30T00:46:16.179241117Z" level=info msg="RemovePodSandbox for \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\"" Apr 30 00:46:16.179391 containerd[2017]: time="2025-04-30T00:46:16.179300913Z" level=info msg="Forcibly stopping sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\"" Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.249 [WARNING][6102] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0", GenerateName:"calico-apiserver-685b87fb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"d404a0b8-b738-4be9-a74e-856f784c975d", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685b87fb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"6a23c9b6495d8ac9ef5e14ad058af7bb4c3654d412246cc02cdbf6cae8103f86", Pod:"calico-apiserver-685b87fb46-zcfxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali309d4b24323", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.249 [INFO][6102] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.249 [INFO][6102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" iface="eth0" netns="" Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.250 [INFO][6102] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.250 [INFO][6102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.289 [INFO][6109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" HandleID="k8s-pod-network.26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.289 [INFO][6109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.289 [INFO][6109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.305 [WARNING][6109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" HandleID="k8s-pod-network.26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.305 [INFO][6109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" HandleID="k8s-pod-network.26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Workload="ip--172--31--25--63-k8s-calico--apiserver--685b87fb46--zcfxw-eth0" Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.307 [INFO][6109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:16.315277 containerd[2017]: 2025-04-30 00:46:16.310 [INFO][6102] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028" Apr 30 00:46:16.315277 containerd[2017]: time="2025-04-30T00:46:16.314878186Z" level=info msg="TearDown network for sandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\" successfully" Apr 30 00:46:16.324033 containerd[2017]: time="2025-04-30T00:46:16.323926810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:46:16.324224 containerd[2017]: time="2025-04-30T00:46:16.324101794Z" level=info msg="RemovePodSandbox \"26e04e2ef48495919ec4c25f6cd521a807669d219e6cc3f4e8e5477671f20028\" returns successfully" Apr 30 00:46:16.324791 containerd[2017]: time="2025-04-30T00:46:16.324734170Z" level=info msg="StopPodSandbox for \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\"" Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.394 [WARNING][6127] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0", GenerateName:"calico-kube-controllers-7bfc54cc8f-", Namespace:"calico-system", SelfLink:"", UID:"da044bc6-8690-4c3f-9ab1-4db58205c9ed", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfc54cc8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa", Pod:"calico-kube-controllers-7bfc54cc8f-c7qvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali782318dfcc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.395 [INFO][6127] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.395 [INFO][6127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" iface="eth0" netns="" Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.395 [INFO][6127] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.395 [INFO][6127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.445 [INFO][6134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" HandleID="k8s-pod-network.c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.446 [INFO][6134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.446 [INFO][6134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.460 [WARNING][6134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" HandleID="k8s-pod-network.c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.460 [INFO][6134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" HandleID="k8s-pod-network.c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.462 [INFO][6134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:16.467708 containerd[2017]: 2025-04-30 00:46:16.465 [INFO][6127] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:16.467708 containerd[2017]: time="2025-04-30T00:46:16.467610958Z" level=info msg="TearDown network for sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\" successfully" Apr 30 00:46:16.470032 containerd[2017]: time="2025-04-30T00:46:16.468165142Z" level=info msg="StopPodSandbox for \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\" returns successfully" Apr 30 00:46:16.470032 containerd[2017]: time="2025-04-30T00:46:16.468969694Z" level=info msg="RemovePodSandbox for \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\"" Apr 30 00:46:16.470032 containerd[2017]: time="2025-04-30T00:46:16.469017046Z" level=info msg="Forcibly stopping sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\"" Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.547 [WARNING][6153] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0", GenerateName:"calico-kube-controllers-7bfc54cc8f-", Namespace:"calico-system", SelfLink:"", UID:"da044bc6-8690-4c3f-9ab1-4db58205c9ed", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfc54cc8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"eeb8adf6e59ce393591bfe16272d0c27e3e1bfc36cc7bb72f77ad328b8d602fa", Pod:"calico-kube-controllers-7bfc54cc8f-c7qvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali782318dfcc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.547 [INFO][6153] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.547 [INFO][6153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" iface="eth0" netns="" Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.547 [INFO][6153] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.547 [INFO][6153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.601 [INFO][6160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" HandleID="k8s-pod-network.c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.601 [INFO][6160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.601 [INFO][6160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.616 [WARNING][6160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" HandleID="k8s-pod-network.c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.616 [INFO][6160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" HandleID="k8s-pod-network.c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Workload="ip--172--31--25--63-k8s-calico--kube--controllers--7bfc54cc8f--c7qvh-eth0" Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.619 [INFO][6160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:16.625444 containerd[2017]: 2025-04-30 00:46:16.622 [INFO][6153] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267" Apr 30 00:46:16.626542 containerd[2017]: time="2025-04-30T00:46:16.625515683Z" level=info msg="TearDown network for sandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\" successfully" Apr 30 00:46:16.633013 containerd[2017]: time="2025-04-30T00:46:16.632917991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:46:16.634515 containerd[2017]: time="2025-04-30T00:46:16.633063059Z" level=info msg="RemovePodSandbox \"c33522e2404167515c0fce286c404d42a583b72e42734d4788bb644e2b8ec267\" returns successfully" Apr 30 00:46:16.634515 containerd[2017]: time="2025-04-30T00:46:16.634005215Z" level=info msg="StopPodSandbox for \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\"" Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.706 [WARNING][6179] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb16121b-e738-4b90-8cde-fe4b2beb11f1", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272", Pod:"csi-node-driver-8cg7x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia759c9b5b7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.706 [INFO][6179] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.706 [INFO][6179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" iface="eth0" netns="" Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.706 [INFO][6179] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.706 [INFO][6179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.747 [INFO][6186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" HandleID="k8s-pod-network.f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.747 [INFO][6186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.747 [INFO][6186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.761 [WARNING][6186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" HandleID="k8s-pod-network.f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.761 [INFO][6186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" HandleID="k8s-pod-network.f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.764 [INFO][6186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:16.769364 containerd[2017]: 2025-04-30 00:46:16.766 [INFO][6179] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:46:16.770344 containerd[2017]: time="2025-04-30T00:46:16.769418052Z" level=info msg="TearDown network for sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\" successfully" Apr 30 00:46:16.770344 containerd[2017]: time="2025-04-30T00:46:16.769455864Z" level=info msg="StopPodSandbox for \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\" returns successfully" Apr 30 00:46:16.770640 containerd[2017]: time="2025-04-30T00:46:16.770597652Z" level=info msg="RemovePodSandbox for \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\"" Apr 30 00:46:16.770735 containerd[2017]: time="2025-04-30T00:46:16.770651292Z" level=info msg="Forcibly stopping sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\"" Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.836 [WARNING][6205] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb16121b-e738-4b90-8cde-fe4b2beb11f1", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"8a2a4eff8b555143611136ec3e21564853c3d09dd93550e49fc1e538e53ac272", Pod:"csi-node-driver-8cg7x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia759c9b5b7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.837 [INFO][6205] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.837 [INFO][6205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" iface="eth0" netns="" Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.837 [INFO][6205] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.837 [INFO][6205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.877 [INFO][6213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" HandleID="k8s-pod-network.f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.877 [INFO][6213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.877 [INFO][6213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.892 [WARNING][6213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" HandleID="k8s-pod-network.f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.892 [INFO][6213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" HandleID="k8s-pod-network.f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Workload="ip--172--31--25--63-k8s-csi--node--driver--8cg7x-eth0" Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.894 [INFO][6213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:16.901245 containerd[2017]: 2025-04-30 00:46:16.898 [INFO][6205] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0" Apr 30 00:46:16.901245 containerd[2017]: time="2025-04-30T00:46:16.901140889Z" level=info msg="TearDown network for sandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\" successfully" Apr 30 00:46:16.909360 containerd[2017]: time="2025-04-30T00:46:16.909246001Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:46:16.909360 containerd[2017]: time="2025-04-30T00:46:16.909344005Z" level=info msg="RemovePodSandbox \"f0aa08d174d294723c88a963336919b1b9764fa9aba8e4da4e188040c0eaf8b0\" returns successfully" Apr 30 00:46:16.910300 containerd[2017]: time="2025-04-30T00:46:16.909913513Z" level=info msg="StopPodSandbox for \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\"" Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:16.977 [WARNING][6231] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179", Pod:"coredns-668d6bf9bc-2wbvz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80aeefc76bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:16.979 [INFO][6231] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:16.979 [INFO][6231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" iface="eth0" netns="" Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:16.979 [INFO][6231] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:16.979 [INFO][6231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:17.029 [INFO][6238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" HandleID="k8s-pod-network.9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:17.030 [INFO][6238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:17.030 [INFO][6238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:17.043 [WARNING][6238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" HandleID="k8s-pod-network.9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:17.043 [INFO][6238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" HandleID="k8s-pod-network.9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:17.045 [INFO][6238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:17.051180 containerd[2017]: 2025-04-30 00:46:17.048 [INFO][6231] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:46:17.052538 containerd[2017]: time="2025-04-30T00:46:17.051241089Z" level=info msg="TearDown network for sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\" successfully" Apr 30 00:46:17.052538 containerd[2017]: time="2025-04-30T00:46:17.051281805Z" level=info msg="StopPodSandbox for \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\" returns successfully" Apr 30 00:46:17.052538 containerd[2017]: time="2025-04-30T00:46:17.052057161Z" level=info msg="RemovePodSandbox for \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\"" Apr 30 00:46:17.052538 containerd[2017]: time="2025-04-30T00:46:17.052130733Z" level=info msg="Forcibly stopping sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\"" Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.133 [WARNING][6256] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9e2cbcdd-5e79-4f1d-b433-3d65cef0e3e1", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 0, 45, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-63", ContainerID:"b98e36f7b7aed426f3c1061d8869ac6287fb63457cccbb2b97665b9a5ccfe179", Pod:"coredns-668d6bf9bc-2wbvz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80aeefc76bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.134 [INFO][6256] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.134 [INFO][6256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" iface="eth0" netns="" Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.135 [INFO][6256] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.135 [INFO][6256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.209 [INFO][6263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" HandleID="k8s-pod-network.9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.209 [INFO][6263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.209 [INFO][6263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.234 [WARNING][6263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" HandleID="k8s-pod-network.9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.235 [INFO][6263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" HandleID="k8s-pod-network.9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Workload="ip--172--31--25--63-k8s-coredns--668d6bf9bc--2wbvz-eth0" Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.238 [INFO][6263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 00:46:17.248589 containerd[2017]: 2025-04-30 00:46:17.245 [INFO][6256] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678" Apr 30 00:46:17.252951 containerd[2017]: time="2025-04-30T00:46:17.248576578Z" level=info msg="TearDown network for sandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\" successfully" Apr 30 00:46:17.264405 containerd[2017]: time="2025-04-30T00:46:17.264261562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:46:17.264405 containerd[2017]: time="2025-04-30T00:46:17.264383506Z" level=info msg="RemovePodSandbox \"9bfcbfbb945387b3b5aae45772b9cbd27a9c738add67749c7f4041eb60262678\" returns successfully" Apr 30 00:46:17.291624 systemd[1]: Started sshd@9-172.31.25.63:22-147.75.109.163:34950.service - OpenSSH per-connection server daemon (147.75.109.163:34950). Apr 30 00:46:17.563695 sshd[6270]: Accepted publickey for core from 147.75.109.163 port 34950 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:46:17.569496 sshd[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:46:17.578849 systemd-logind[1995]: New session 10 of user core. Apr 30 00:46:17.589405 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:46:17.898258 sshd[6270]: pam_unix(sshd:session): session closed for user core Apr 30 00:46:17.904664 systemd[1]: sshd@9-172.31.25.63:22-147.75.109.163:34950.service: Deactivated successfully. Apr 30 00:46:17.910589 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:46:17.912407 systemd-logind[1995]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:46:17.914075 systemd-logind[1995]: Removed session 10. Apr 30 00:46:17.951648 systemd[1]: Started sshd@10-172.31.25.63:22-147.75.109.163:34952.service - OpenSSH per-connection server daemon (147.75.109.163:34952). Apr 30 00:46:18.214903 sshd[6284]: Accepted publickey for core from 147.75.109.163 port 34952 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:46:18.218176 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:46:18.226469 systemd-logind[1995]: New session 11 of user core. Apr 30 00:46:18.237444 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 30 00:46:18.609729 sshd[6284]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:18.622087 systemd[1]: sshd@10-172.31.25.63:22-147.75.109.163:34952.service: Deactivated successfully.
Apr 30 00:46:18.632325 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 00:46:18.635275 systemd-logind[1995]: Session 11 logged out. Waiting for processes to exit.
Apr 30 00:46:18.638848 systemd-logind[1995]: Removed session 11.
Apr 30 00:46:18.664649 systemd[1]: Started sshd@11-172.31.25.63:22-147.75.109.163:34958.service - OpenSSH per-connection server daemon (147.75.109.163:34958).
Apr 30 00:46:18.936302 sshd[6294]: Accepted publickey for core from 147.75.109.163 port 34958 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:18.938830 sshd[6294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:18.947280 systemd-logind[1995]: New session 12 of user core.
Apr 30 00:46:18.958401 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 00:46:19.247513 sshd[6294]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:19.255055 systemd[1]: sshd@11-172.31.25.63:22-147.75.109.163:34958.service: Deactivated successfully.
Apr 30 00:46:19.260324 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 00:46:19.262756 systemd-logind[1995]: Session 12 logged out. Waiting for processes to exit.
Apr 30 00:46:19.265241 systemd-logind[1995]: Removed session 12.
Apr 30 00:46:24.304644 systemd[1]: Started sshd@12-172.31.25.63:22-147.75.109.163:34972.service - OpenSSH per-connection server daemon (147.75.109.163:34972).
Apr 30 00:46:24.579318 sshd[6343]: Accepted publickey for core from 147.75.109.163 port 34972 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:24.582074 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:24.591155 systemd-logind[1995]: New session 13 of user core.
Apr 30 00:46:24.598387 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 00:46:24.911192 sshd[6343]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:24.919640 systemd[1]: sshd@12-172.31.25.63:22-147.75.109.163:34972.service: Deactivated successfully.
Apr 30 00:46:24.925023 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 00:46:24.926673 systemd-logind[1995]: Session 13 logged out. Waiting for processes to exit.
Apr 30 00:46:24.930715 systemd-logind[1995]: Removed session 13.
Apr 30 00:46:26.540216 kubelet[3213]: I0430 00:46:26.538370 3213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 00:46:29.963655 systemd[1]: Started sshd@13-172.31.25.63:22-147.75.109.163:52954.service - OpenSSH per-connection server daemon (147.75.109.163:52954).
Apr 30 00:46:30.247364 sshd[6375]: Accepted publickey for core from 147.75.109.163 port 52954 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:30.252661 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:30.263601 systemd-logind[1995]: New session 14 of user core.
Apr 30 00:46:30.273465 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 00:46:30.611842 sshd[6375]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:30.621607 systemd[1]: sshd@13-172.31.25.63:22-147.75.109.163:52954.service: Deactivated successfully.
Apr 30 00:46:30.628599 systemd[1]: session-14.scope: Deactivated successfully.
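These short-lived connections from 147.75.109.163 all follow the same arc: service start, publickey accept, pam session open, logind session, then teardown a few hundred milliseconds later. When auditing a burst like this, pairing each "New session N" with its "Removed session N" line gives per-session lifetimes. A stdlib-only Go sketch that does that over a journal dump on stdin; it is hypothetical tooling, assuming the short-precise timestamp format shown here and the year 2025 from context:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"time"
    )

    // Matches e.g. "Apr 30 00:46:17.578849 systemd-logind[1995]: New session 10 of user core."
    // and          "Apr 30 00:46:17.914075 systemd-logind[1995]: Removed session 10."
    var (
    	openRe  = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user`)
    	closeRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
    )

    func parseTS(s string) time.Time {
    	// Journal short timestamps omit the year; 2025 is assumed from context.
    	t, _ := time.Parse("Jan 2 15:04:05.000000 2006", s+" 2025")
    	return t
    }

    func main() {
    	opened := map[string]time.Time{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
    	for sc.Scan() {
    		line := sc.Text()
    		if m := openRe.FindStringSubmatch(line); m != nil {
    			opened[m[2]] = parseTS(m[1])
    		} else if m := closeRe.FindStringSubmatch(line); m != nil {
    			if start, ok := opened[m[2]]; ok {
    				fmt.Printf("session %s lasted %v\n", m[2], parseTS(m[1]).Sub(start))
    				delete(opened, m[2])
    			}
    		}
    	}
    }

Fed output in journalctl's short-precise format, it prints one duration per completed session, which makes the sub-second cadence of this burst easy to see.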
Apr 30 00:46:30.632934 systemd-logind[1995]: Session 14 logged out. Waiting for processes to exit.
Apr 30 00:46:30.634977 systemd-logind[1995]: Removed session 14.
Apr 30 00:46:35.669622 systemd[1]: Started sshd@14-172.31.25.63:22-147.75.109.163:52958.service - OpenSSH per-connection server daemon (147.75.109.163:52958).
Apr 30 00:46:35.933062 sshd[6388]: Accepted publickey for core from 147.75.109.163 port 52958 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:35.935358 sshd[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:35.952196 systemd-logind[1995]: New session 15 of user core.
Apr 30 00:46:36.018443 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 00:46:36.400886 sshd[6388]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:36.410872 systemd[1]: sshd@14-172.31.25.63:22-147.75.109.163:52958.service: Deactivated successfully.
Apr 30 00:46:36.418971 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 00:46:36.421756 systemd-logind[1995]: Session 15 logged out. Waiting for processes to exit.
Apr 30 00:46:36.424631 systemd-logind[1995]: Removed session 15.
Apr 30 00:46:41.456914 systemd[1]: Started sshd@15-172.31.25.63:22-147.75.109.163:46644.service - OpenSSH per-connection server daemon (147.75.109.163:46644).
Apr 30 00:46:41.736069 sshd[6428]: Accepted publickey for core from 147.75.109.163 port 46644 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:41.739071 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:41.747610 systemd-logind[1995]: New session 16 of user core.
Apr 30 00:46:41.757467 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 00:46:42.056433 sshd[6428]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:42.063632 systemd-logind[1995]: Session 16 logged out. Waiting for processes to exit.
Apr 30 00:46:42.066047 systemd[1]: sshd@15-172.31.25.63:22-147.75.109.163:46644.service: Deactivated successfully.
Apr 30 00:46:42.072688 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 00:46:42.075453 systemd-logind[1995]: Removed session 16.
Apr 30 00:46:42.106643 systemd[1]: Started sshd@16-172.31.25.63:22-147.75.109.163:46646.service - OpenSSH per-connection server daemon (147.75.109.163:46646).
Apr 30 00:46:42.372749 sshd[6444]: Accepted publickey for core from 147.75.109.163 port 46646 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:42.375607 sshd[6444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:42.384458 systemd-logind[1995]: New session 17 of user core.
Apr 30 00:46:42.390425 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 00:46:42.952158 sshd[6444]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:42.958762 systemd[1]: sshd@16-172.31.25.63:22-147.75.109.163:46646.service: Deactivated successfully.
Apr 30 00:46:42.964478 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 00:46:42.965906 systemd-logind[1995]: Session 17 logged out. Waiting for processes to exit.
Apr 30 00:46:42.968792 systemd-logind[1995]: Removed session 17.
Apr 30 00:46:43.005693 systemd[1]: Started sshd@17-172.31.25.63:22-147.75.109.163:46660.service - OpenSSH per-connection server daemon (147.75.109.163:46660).
Apr 30 00:46:43.285425 sshd[6457]: Accepted publickey for core from 147.75.109.163 port 46660 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:43.287686 sshd[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:43.296959 systemd-logind[1995]: New session 18 of user core.
Apr 30 00:46:43.302405 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 00:46:44.705923 sshd[6457]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:44.720434 systemd[1]: sshd@17-172.31.25.63:22-147.75.109.163:46660.service: Deactivated successfully.
Apr 30 00:46:44.730646 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 00:46:44.732348 systemd-logind[1995]: Session 18 logged out. Waiting for processes to exit.
Apr 30 00:46:44.735832 systemd-logind[1995]: Removed session 18.
Apr 30 00:46:44.763913 systemd[1]: Started sshd@18-172.31.25.63:22-147.75.109.163:46674.service - OpenSSH per-connection server daemon (147.75.109.163:46674).
Apr 30 00:46:45.035496 sshd[6478]: Accepted publickey for core from 147.75.109.163 port 46674 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:45.038745 sshd[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:45.047256 systemd-logind[1995]: New session 19 of user core.
Apr 30 00:46:45.057431 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 00:46:45.657896 sshd[6478]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:45.666771 systemd[1]: sshd@18-172.31.25.63:22-147.75.109.163:46674.service: Deactivated successfully.
Apr 30 00:46:45.671665 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 00:46:45.674230 systemd-logind[1995]: Session 19 logged out. Waiting for processes to exit.
Apr 30 00:46:45.676428 systemd-logind[1995]: Removed session 19.
Apr 30 00:46:45.710906 systemd[1]: Started sshd@19-172.31.25.63:22-147.75.109.163:46678.service - OpenSSH per-connection server daemon (147.75.109.163:46678).
Apr 30 00:46:45.980095 sshd[6490]: Accepted publickey for core from 147.75.109.163 port 46678 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:45.982371 sshd[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:45.990659 systemd-logind[1995]: New session 20 of user core.
Apr 30 00:46:45.998358 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 00:46:46.295018 sshd[6490]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:46.304713 systemd[1]: sshd@19-172.31.25.63:22-147.75.109.163:46678.service: Deactivated successfully.
Apr 30 00:46:46.309728 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 00:46:46.313782 systemd-logind[1995]: Session 20 logged out. Waiting for processes to exit.
Apr 30 00:46:46.321653 systemd-logind[1995]: Removed session 20.
Apr 30 00:46:51.355493 systemd[1]: Started sshd@20-172.31.25.63:22-147.75.109.163:58382.service - OpenSSH per-connection server daemon (147.75.109.163:58382).
Apr 30 00:46:51.623206 sshd[6503]: Accepted publickey for core from 147.75.109.163 port 58382 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:51.625883 sshd[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:51.635685 systemd-logind[1995]: New session 21 of user core.
Apr 30 00:46:51.638420 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 00:46:51.940296 sshd[6503]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:51.949062 systemd[1]: sshd@20-172.31.25.63:22-147.75.109.163:58382.service: Deactivated successfully.
Apr 30 00:46:51.954993 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 00:46:51.956852 systemd-logind[1995]: Session 21 logged out. Waiting for processes to exit.
Apr 30 00:46:51.958743 systemd-logind[1995]: Removed session 21.
Apr 30 00:46:56.998630 systemd[1]: Started sshd@21-172.31.25.63:22-147.75.109.163:45058.service - OpenSSH per-connection server daemon (147.75.109.163:45058).
Apr 30 00:46:57.271676 sshd[6543]: Accepted publickey for core from 147.75.109.163 port 45058 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:57.275382 sshd[6543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:57.286413 systemd-logind[1995]: New session 22 of user core.
Apr 30 00:46:57.295395 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 00:46:57.587034 sshd[6543]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:57.593699 systemd[1]: sshd@21-172.31.25.63:22-147.75.109.163:45058.service: Deactivated successfully.
Apr 30 00:46:57.594603 systemd-logind[1995]: Session 22 logged out. Waiting for processes to exit.
Apr 30 00:46:57.598373 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 00:46:57.604498 systemd-logind[1995]: Removed session 22.
Apr 30 00:47:00.764804 update_engine[1996]: I20250430 00:47:00.764704 1996 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 30 00:47:00.764804 update_engine[1996]: I20250430 00:47:00.764794 1996 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 30 00:47:00.766445 update_engine[1996]: I20250430 00:47:00.765183 1996 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 30 00:47:00.766445 update_engine[1996]: I20250430 00:47:00.766355 1996 omaha_request_params.cc:62] Current group set to lts
Apr 30 00:47:00.766595 update_engine[1996]: I20250430 00:47:00.766530 1996 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 30 00:47:00.766595 update_engine[1996]: I20250430 00:47:00.766552 1996 update_attempter.cc:643] Scheduling an action processor start.
Apr 30 00:47:00.766595 update_engine[1996]: I20250430 00:47:00.766585 1996 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 30 00:47:00.766742 update_engine[1996]: I20250430 00:47:00.766652 1996 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 30 00:47:00.766791 update_engine[1996]: I20250430 00:47:00.766765 1996 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 30 00:47:00.766859 update_engine[1996]: I20250430 00:47:00.766784 1996 omaha_request_action.cc:272] Request:
Apr 30 00:47:00.766859 update_engine[1996]:
Apr 30 00:47:00.766859 update_engine[1996]:
Apr 30 00:47:00.766859 update_engine[1996]:
Apr 30 00:47:00.766859 update_engine[1996]:
Apr 30 00:47:00.766859 update_engine[1996]:
Apr 30 00:47:00.766859 update_engine[1996]:
Apr 30 00:47:00.766859 update_engine[1996]:
Apr 30 00:47:00.766859 update_engine[1996]:
Apr 30 00:47:00.766859 update_engine[1996]: I20250430 00:47:00.766803 1996 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:47:00.769167 locksmithd[2032]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 30 00:47:00.772877 update_engine[1996]: I20250430 00:47:00.772310 1996 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:47:00.773195 update_engine[1996]: I20250430 00:47:00.772969 1996 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:47:00.783214 update_engine[1996]: E20250430 00:47:00.782291 1996 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:47:00.783214 update_engine[1996]: I20250430 00:47:00.782799 1996 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 30 00:47:02.644883 systemd[1]: Started sshd@22-172.31.25.63:22-147.75.109.163:45066.service - OpenSSH per-connection server daemon (147.75.109.163:45066).
Apr 30 00:47:02.910506 sshd[6555]: Accepted publickey for core from 147.75.109.163 port 45066 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:47:02.913621 sshd[6555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:47:02.922664 systemd-logind[1995]: New session 23 of user core.
Apr 30 00:47:02.929438 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:47:03.268583 sshd[6555]: pam_unix(sshd:session): session closed for user core
Apr 30 00:47:03.283647 systemd[1]: sshd@22-172.31.25.63:22-147.75.109.163:45066.service: Deactivated successfully.
Apr 30 00:47:03.299747 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 00:47:03.311967 systemd-logind[1995]: Session 23 logged out. Waiting for processes to exit.
Apr 30 00:47:03.315661 systemd-logind[1995]: Removed session 23.
Apr 30 00:47:08.329637 systemd[1]: Started sshd@23-172.31.25.63:22-147.75.109.163:57202.service - OpenSSH per-connection server daemon (147.75.109.163:57202).
Apr 30 00:47:08.598369 sshd[6568]: Accepted publickey for core from 147.75.109.163 port 57202 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:47:08.600864 sshd[6568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:47:08.609738 systemd-logind[1995]: New session 24 of user core.
Apr 30 00:47:08.619396 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 00:47:08.908945 sshd[6568]: pam_unix(sshd:session): session closed for user core
Apr 30 00:47:08.914782 systemd[1]: sshd@23-172.31.25.63:22-147.75.109.163:57202.service: Deactivated successfully.
Apr 30 00:47:08.919062 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 00:47:08.923343 systemd-logind[1995]: Session 24 logged out. Waiting for processes to exit.
Apr 30 00:47:08.925764 systemd-logind[1995]: Removed session 24.
Apr 30 00:47:10.762959 update_engine[1996]: I20250430 00:47:10.762862 1996 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:47:10.763582 update_engine[1996]: I20250430 00:47:10.763264 1996 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:47:10.763648 update_engine[1996]: I20250430 00:47:10.763579 1996 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:47:10.765067 update_engine[1996]: E20250430 00:47:10.765007 1996 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:47:10.765205 update_engine[1996]: I20250430 00:47:10.765102 1996 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 30 00:47:10.984408 systemd[1]: run-containerd-runc-k8s.io-7a1634c7359a3eb2ae6031e34605106388ca778eb743f7b8b635b04f8c4c34ef-runc.s4iByh.mount: Deactivated successfully.
Apr 30 00:47:13.972272 systemd[1]: Started sshd@24-172.31.25.63:22-147.75.109.163:57206.service - OpenSSH per-connection server daemon (147.75.109.163:57206).
Apr 30 00:47:14.230490 sshd[6600]: Accepted publickey for core from 147.75.109.163 port 57206 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:47:14.233396 sshd[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:47:14.242524 systemd-logind[1995]: New session 25 of user core.
Apr 30 00:47:14.250423 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 00:47:14.560481 sshd[6600]: pam_unix(sshd:session): session closed for user core
Apr 30 00:47:14.568808 systemd[1]: sshd@24-172.31.25.63:22-147.75.109.163:57206.service: Deactivated successfully.
Apr 30 00:47:14.572566 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 00:47:14.575401 systemd-logind[1995]: Session 25 logged out. Waiting for processes to exit.
Apr 30 00:47:14.577519 systemd-logind[1995]: Removed session 25.
Apr 30 00:47:19.614637 systemd[1]: Started sshd@25-172.31.25.63:22-147.75.109.163:52356.service - OpenSSH per-connection server daemon (147.75.109.163:52356).
Apr 30 00:47:19.894952 sshd[6615]: Accepted publickey for core from 147.75.109.163 port 52356 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:47:19.898206 sshd[6615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:47:19.908211 systemd-logind[1995]: New session 26 of user core.
Apr 30 00:47:19.918386 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 00:47:20.211498 sshd[6615]: pam_unix(sshd:session): session closed for user core
Apr 30 00:47:20.217740 systemd[1]: sshd@25-172.31.25.63:22-147.75.109.163:52356.service: Deactivated successfully.
Apr 30 00:47:20.222766 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 00:47:20.224904 systemd-logind[1995]: Session 26 logged out. Waiting for processes to exit.
Apr 30 00:47:20.228533 systemd-logind[1995]: Removed session 26.
Apr 30 00:47:20.762211 update_engine[1996]: I20250430 00:47:20.762102 1996 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:47:20.762835 update_engine[1996]: I20250430 00:47:20.762480 1996 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:47:20.762835 update_engine[1996]: I20250430 00:47:20.762773 1996 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:47:20.763279 update_engine[1996]: E20250430 00:47:20.763236 1996 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:47:20.763366 update_engine[1996]: I20250430 00:47:20.763324 1996 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 30 00:47:28.458280 systemd[1]: run-containerd-runc-k8s.io-7a1634c7359a3eb2ae6031e34605106388ca778eb743f7b8b635b04f8c4c34ef-runc.FVmAi4.mount: Deactivated successfully.
Apr 30 00:47:30.761879 update_engine[1996]: I20250430 00:47:30.761725 1996 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:47:30.762452 update_engine[1996]: I20250430 00:47:30.762076 1996 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:47:30.762560 update_engine[1996]: I20250430 00:47:30.762485 1996 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:47:30.763025 update_engine[1996]: E20250430 00:47:30.762965 1996 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:47:30.763656 update_engine[1996]: I20250430 00:47:30.763056 1996 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 30 00:47:30.763656 update_engine[1996]: I20250430 00:47:30.763079 1996 omaha_request_action.cc:617] Omaha request response:
Apr 30 00:47:30.763829 update_engine[1996]: E20250430 00:47:30.763782 1996 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 30 00:47:30.763896 update_engine[1996]: I20250430 00:47:30.763837 1996 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 30 00:47:30.763896 update_engine[1996]: I20250430 00:47:30.763859 1996 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 00:47:30.763896 update_engine[1996]: I20250430 00:47:30.763875 1996 update_attempter.cc:306] Processing Done.
Apr 30 00:47:30.764045 update_engine[1996]: E20250430 00:47:30.763901 1996 update_attempter.cc:619] Update failed.
Apr 30 00:47:30.764045 update_engine[1996]: I20250430 00:47:30.763919 1996 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 30 00:47:30.764045 update_engine[1996]: I20250430 00:47:30.763936 1996 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 30 00:47:30.764045 update_engine[1996]: I20250430 00:47:30.763952 1996 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 30 00:47:30.764313 update_engine[1996]: I20250430 00:47:30.764065 1996 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 30 00:47:30.764313 update_engine[1996]: I20250430 00:47:30.764143 1996 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 30 00:47:30.764313 update_engine[1996]: I20250430 00:47:30.764165 1996 omaha_request_action.cc:272] Request:
Apr 30 00:47:30.764313 update_engine[1996]:
Apr 30 00:47:30.764313 update_engine[1996]:
Apr 30 00:47:30.764313 update_engine[1996]:
Apr 30 00:47:30.764313 update_engine[1996]:
Apr 30 00:47:30.764313 update_engine[1996]:
Apr 30 00:47:30.764313 update_engine[1996]:
Apr 30 00:47:30.764313 update_engine[1996]: I20250430 00:47:30.764182 1996 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:47:30.764783 update_engine[1996]: I20250430 00:47:30.764601 1996 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:47:30.765161 update_engine[1996]: I20250430 00:47:30.764987 1996 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:47:30.765516 locksmithd[2032]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 30 00:47:30.766409 update_engine[1996]: E20250430 00:47:30.765774 1996 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:47:30.766409 update_engine[1996]: I20250430 00:47:30.765857 1996 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 30 00:47:30.766409 update_engine[1996]: I20250430 00:47:30.765877 1996 omaha_request_action.cc:617] Omaha request response:
Apr 30 00:47:30.766409 update_engine[1996]: I20250430 00:47:30.765893 1996 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 00:47:30.766409 update_engine[1996]: I20250430 00:47:30.765909 1996 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 00:47:30.766409 update_engine[1996]: I20250430 00:47:30.765924 1996 update_attempter.cc:306] Processing Done.
Apr 30 00:47:30.766409 update_engine[1996]: I20250430 00:47:30.765940 1996 update_attempter.cc:310] Error event sent.
Apr 30 00:47:30.766409 update_engine[1996]: I20250430 00:47:30.765960 1996 update_check_scheduler.cc:74] Next update check in 40m30s
Apr 30 00:47:30.766803 locksmithd[2032]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 30 00:47:34.109851 systemd[1]: cri-containerd-1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f.scope: Deactivated successfully.
Apr 30 00:47:34.110351 systemd[1]: cri-containerd-1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f.scope: Consumed 7.159s CPU time.
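The 00:47:00 to 00:47:30 block is one complete update_engine check: the Omaha endpoint is literally named "disabled" (hence "Could not resolve host: disabled"), each failed transfer is retried, and after retry 3 the attempt is abandoned, an error event is posted to the same unresolvable host, and the next check is scheduled 40m30s out. A stdlib-only Go sketch of that retry shape; the 10-second spacing and the three-retry cap are read off the timestamps above rather than taken from update_engine's source:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    const (
    	maxRetries    = 3                // "No HTTP response, retry 1..3" in the log
    	retryInterval = 10 * time.Second // inferred from the 00:47:00/10/20/30 attempts
    	nextCheck     = 40*time.Minute + 30*time.Second
    )

    // fetch stands in for one libcurl transfer; "disabled" never resolves.
    func fetch(host string) error {
    	return fmt.Errorf("could not resolve host: %s", host)
    }

    // checkForUpdate mirrors the attempt/retry/give-up arc in the log.
    func checkForUpdate(host string) error {
    	var err error
    	for attempt := 0; attempt <= maxRetries; attempt++ {
    		if err = fetch(host); err == nil {
    			return nil
    		}
    		if attempt < maxRetries {
    			fmt.Printf("No HTTP response, retry %d\n", attempt+1)
    			time.Sleep(retryInterval)
    		}
    	}
    	return errors.Join(errors.New("transfer resulted in an error, 0 bytes downloaded"), err)
    }

    func main() {
    	if err := checkForUpdate("disabled"); err != nil {
    		fmt.Println("update failed:", err)
    		fmt.Println("next update check in", nextCheck)
    	}
    }

The "Ignoring failures until we get a valid Omaha response" line is the important safety property: a disabled or unreachable update server degrades to periodic no-ops rather than an error loop.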
Apr 30 00:47:34.147744 containerd[2017]: time="2025-04-30T00:47:34.147080076Z" level=info msg="shim disconnected" id=1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f namespace=k8s.io
Apr 30 00:47:34.147744 containerd[2017]: time="2025-04-30T00:47:34.147200760Z" level=warning msg="cleaning up after shim disconnected" id=1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f namespace=k8s.io
Apr 30 00:47:34.147744 containerd[2017]: time="2025-04-30T00:47:34.147223872Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:34.151968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f-rootfs.mount: Deactivated successfully.
Apr 30 00:47:34.763047 systemd[1]: cri-containerd-f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a.scope: Deactivated successfully.
Apr 30 00:47:34.763584 systemd[1]: cri-containerd-f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a.scope: Consumed 4.812s CPU time, 17.6M memory peak, 0B memory swap peak.
Apr 30 00:47:34.811551 containerd[2017]: time="2025-04-30T00:47:34.811181512Z" level=info msg="shim disconnected" id=f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a namespace=k8s.io
Apr 30 00:47:34.811551 containerd[2017]: time="2025-04-30T00:47:34.811262644Z" level=warning msg="cleaning up after shim disconnected" id=f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a namespace=k8s.io
Apr 30 00:47:34.811551 containerd[2017]: time="2025-04-30T00:47:34.811282396Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:34.818807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a-rootfs.mount: Deactivated successfully.
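Both scopes above carry systemd's resource accounting ("Consumed ... CPU time, ... memory peak"), which is useful when judging whether the exits that follow were resource-related. A small stdlib-only Go sketch (hypothetical tooling, not part of the stack above) that extracts those accounting lines for cri-containerd scopes from a journal dump on stdin:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // Matches e.g. "cri-containerd-f494c4ab....scope: Consumed 4.812s CPU time,
    // 17.6M memory peak, 0B memory swap peak."
    var consumedRe = regexp.MustCompile(
    	`cri-containerd-([0-9a-f]{64})\.scope: Consumed (.+?)\.?$`)

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
    	for sc.Scan() {
    		if m := consumedRe.FindStringSubmatch(sc.Text()); m != nil {
    			// Print a shortened container ID plus whatever systemd accounted for.
    			fmt.Printf("%s… consumed %s\n", m[1][:12], m[2])
    		}
    	}
    }

Run over this section it would report modest usage (7.159s, 4.812s, and 3.758s of CPU), which points away from resource exhaustion and toward the control-plane containers exiting for another reason.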
Apr 30 00:47:34.833394 containerd[2017]: time="2025-04-30T00:47:34.833329492Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:47:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 00:47:35.221477 kubelet[3213]: I0430 00:47:35.221427 3213 scope.go:117] "RemoveContainer" containerID="1ca3e46c80b18089779a268188544de286bfb8b5eac893898cb70e19365b578f"
Apr 30 00:47:35.226145 kubelet[3213]: I0430 00:47:35.226047 3213 scope.go:117] "RemoveContainer" containerID="f494c4ab1e95d4f0f5f51f9e4a64b1a7f6668b2b53f3dd20ca6f24e19325fc5a"
Apr 30 00:47:35.227942 containerd[2017]: time="2025-04-30T00:47:35.227703362Z" level=info msg="CreateContainer within sandbox \"91157bb16cf9f4419ccd3d7ac1454a06f7ef9e32bd56570b3ef80075b08111e7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 30 00:47:35.229942 containerd[2017]: time="2025-04-30T00:47:35.229762802Z" level=info msg="CreateContainer within sandbox \"82766026e428b9653d879675543c08864ccca3dccc73c675e579408b16fcee88\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 00:47:35.263669 containerd[2017]: time="2025-04-30T00:47:35.263494502Z" level=info msg="CreateContainer within sandbox \"91157bb16cf9f4419ccd3d7ac1454a06f7ef9e32bd56570b3ef80075b08111e7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f9e052d9c01a83b707c4fc8333d30a99db1050452ada34c703c9accac0207d84\""
Apr 30 00:47:35.264713 containerd[2017]: time="2025-04-30T00:47:35.264668918Z" level=info msg="StartContainer for \"f9e052d9c01a83b707c4fc8333d30a99db1050452ada34c703c9accac0207d84\""
Apr 30 00:47:35.272830 containerd[2017]: time="2025-04-30T00:47:35.272657546Z" level=info msg="CreateContainer within sandbox \"82766026e428b9653d879675543c08864ccca3dccc73c675e579408b16fcee88\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0a976c0e4467a937cd6094bd51e66f84a1b31ba6f1c76c136d649a39f99eb5cd\""
Apr 30 00:47:35.274441 containerd[2017]: time="2025-04-30T00:47:35.274186766Z" level=info msg="StartContainer for \"0a976c0e4467a937cd6094bd51e66f84a1b31ba6f1c76c136d649a39f99eb5cd\""
Apr 30 00:47:35.350093 systemd[1]: Started cri-containerd-f9e052d9c01a83b707c4fc8333d30a99db1050452ada34c703c9accac0207d84.scope - libcontainer container f9e052d9c01a83b707c4fc8333d30a99db1050452ada34c703c9accac0207d84.
Apr 30 00:47:35.363466 systemd[1]: Started cri-containerd-0a976c0e4467a937cd6094bd51e66f84a1b31ba6f1c76c136d649a39f99eb5cd.scope - libcontainer container 0a976c0e4467a937cd6094bd51e66f84a1b31ba6f1c76c136d649a39f99eb5cd.
Apr 30 00:47:35.439162 containerd[2017]: time="2025-04-30T00:47:35.438497331Z" level=info msg="StartContainer for \"f9e052d9c01a83b707c4fc8333d30a99db1050452ada34c703c9accac0207d84\" returns successfully"
Apr 30 00:47:35.480526 containerd[2017]: time="2025-04-30T00:47:35.480297963Z" level=info msg="StartContainer for \"0a976c0e4467a937cd6094bd51e66f84a1b31ba6f1c76c136d649a39f99eb5cd\" returns successfully"
Apr 30 00:47:37.820044 kubelet[3213]: E0430 00:47:37.819712 3213 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-63?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 30 00:47:39.559989 systemd[1]: cri-containerd-a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0.scope: Deactivated successfully.
Apr 30 00:47:39.560551 systemd[1]: cri-containerd-a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0.scope: Consumed 3.758s CPU time, 15.6M memory peak, 0B memory swap peak.
Apr 30 00:47:39.606739 containerd[2017]: time="2025-04-30T00:47:39.603455863Z" level=info msg="shim disconnected" id=a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0 namespace=k8s.io
Apr 30 00:47:39.606739 containerd[2017]: time="2025-04-30T00:47:39.603560359Z" level=warning msg="cleaning up after shim disconnected" id=a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0 namespace=k8s.io
Apr 30 00:47:39.606739 containerd[2017]: time="2025-04-30T00:47:39.603619687Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:39.605682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0-rootfs.mount: Deactivated successfully.
Apr 30 00:47:40.249270 kubelet[3213]: I0430 00:47:40.249211 3213 scope.go:117] "RemoveContainer" containerID="a0eaaa463f727e45098ec31cd6b43f35184f09813258b6814a49d20e747be6e0"
Apr 30 00:47:40.252972 containerd[2017]: time="2025-04-30T00:47:40.252916243Z" level=info msg="CreateContainer within sandbox \"fe29529214778a74f359f8438b0f0c6529dfad457dc20e172d1136a62b3d2bd0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 00:47:40.276959 containerd[2017]: time="2025-04-30T00:47:40.276617263Z" level=info msg="CreateContainer within sandbox \"fe29529214778a74f359f8438b0f0c6529dfad457dc20e172d1136a62b3d2bd0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2c58b5b58cb508d3e769231acf44f4472143d3334297dc2e85a3573060ee6705\""
Apr 30 00:47:40.279328 containerd[2017]: time="2025-04-30T00:47:40.277456075Z" level=info msg="StartContainer for \"2c58b5b58cb508d3e769231acf44f4472143d3334297dc2e85a3573060ee6705\""
Apr 30 00:47:40.339448 systemd[1]: Started cri-containerd-2c58b5b58cb508d3e769231acf44f4472143d3334297dc2e85a3573060ee6705.scope - libcontainer container 2c58b5b58cb508d3e769231acf44f4472143d3334297dc2e85a3573060ee6705.
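The "Failed to update lease" errors bracket these restarts: the kubelet's PUT to the coordination API times out while kube-controller-manager and kube-scheduler are being recreated (each at Attempt:1, i.e. their second run). A minimal client-go sketch for spot-checking that node lease afterwards; the /etc/kubernetes/admin.conf path is an assumption, while the namespace and lease name come from the failing URL in the log:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// The kubeconfig path is an assumption; point this at any admin kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Node leases live in kube-node-lease, one Lease per node; this is the
    	// same object the kubelet's failing PUT in the log was targeting.
    	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
    		Get(context.TODO(), "ip-172-31-25-63", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// A renewTime much older than the lease duration (40s by default) means
    	// the kubelet has not managed to renew, matching the E-level messages above.
    	fmt.Printf("lease %s renewTime=%v\n", lease.Name, lease.Spec.RenewTime)
    }

A stale lease is how the node controller decides a node is unreachable, so these transient renewal failures are worth correlating with the control-plane container churn around them.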
Apr 30 00:47:40.406440 containerd[2017]: time="2025-04-30T00:47:40.406367551Z" level=info msg="StartContainer for \"2c58b5b58cb508d3e769231acf44f4472143d3334297dc2e85a3573060ee6705\" returns successfully"
Apr 30 00:47:47.821059 kubelet[3213]: E0430 00:47:47.820679 3213 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-63?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 30 00:47:52.681379 systemd[1]: run-containerd-runc-k8s.io-9848c2eecaa5be551c2c05daefe5eab236f59d656b0970bc0740eb475ecfaf5e-runc.uik9jb.mount: Deactivated successfully.