Jul 15 04:40:06.111699 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 15 04:40:06.111742 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 03:28:41 -00 2025
Jul 15 04:40:06.111766 kernel: KASLR disabled due to lack of seed
Jul 15 04:40:06.111782 kernel: efi: EFI v2.7 by EDK II
Jul 15 04:40:06.111797 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Jul 15 04:40:06.111812 kernel: secureboot: Secure boot disabled
Jul 15 04:40:06.111829 kernel: ACPI: Early table checksum verification disabled
Jul 15 04:40:06.111844 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 15 04:40:06.111859 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 15 04:40:06.111874 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 15 04:40:06.111889 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 15 04:40:06.111908 kernel: ACPI: FACS 0x0000000078630000 000040
Jul 15 04:40:06.111923 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 15 04:40:06.111938 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 15 04:40:06.111956 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 15 04:40:06.111971 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 15 04:40:06.111991 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 15 04:40:06.112008 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 15 04:40:06.112023 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 15 04:40:06.112039 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 15 04:40:06.112055 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 15 04:40:06.112070 kernel: printk: legacy bootconsole [uart0] enabled
Jul 15 04:40:06.112086 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 04:40:06.112102 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 15 04:40:06.112118 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Jul 15 04:40:06.112134 kernel: Zone ranges:
Jul 15 04:40:06.112150 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 15 04:40:06.112169 kernel: DMA32 empty
Jul 15 04:40:06.112185 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 15 04:40:06.112200 kernel: Device empty
Jul 15 04:40:06.112215 kernel: Movable zone start for each node
Jul 15 04:40:06.112231 kernel: Early memory node ranges
Jul 15 04:40:06.112246 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 15 04:40:06.112262 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 15 04:40:06.112278 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 15 04:40:06.112293 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 15 04:40:06.112309 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 15 04:40:06.112325 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 15 04:40:06.112340 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 15 04:40:06.112360 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 15 04:40:06.112382 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 15 04:40:06.112399 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 15 04:40:06.112416 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jul 15 04:40:06.112432 kernel: psci: probing for conduit method from ACPI.
Jul 15 04:40:06.112473 kernel: psci: PSCIv1.0 detected in firmware.
Jul 15 04:40:06.112493 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 04:40:06.112510 kernel: psci: Trusted OS migration not required
Jul 15 04:40:06.112526 kernel: psci: SMC Calling Convention v1.1
Jul 15 04:40:06.112543 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 15 04:40:06.112560 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 04:40:06.112576 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 04:40:06.112593 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 15 04:40:06.112610 kernel: Detected PIPT I-cache on CPU0
Jul 15 04:40:06.115131 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 04:40:06.115158 kernel: CPU features: detected: Spectre-v2
Jul 15 04:40:06.115185 kernel: CPU features: detected: Spectre-v3a
Jul 15 04:40:06.115202 kernel: CPU features: detected: Spectre-BHB
Jul 15 04:40:06.115220 kernel: CPU features: detected: ARM erratum 1742098
Jul 15 04:40:06.115236 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 15 04:40:06.115253 kernel: alternatives: applying boot alternatives
Jul 15 04:40:06.115272 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd
Jul 15 04:40:06.115290 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 04:40:06.115309 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 04:40:06.115326 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 04:40:06.115343 kernel: Fallback order for Node 0: 0
Jul 15 04:40:06.115363 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jul 15 04:40:06.115381 kernel: Policy zone: Normal
Jul 15 04:40:06.115398 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 04:40:06.115414 kernel: software IO TLB: area num 2.
Jul 15 04:40:06.115431 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Jul 15 04:40:06.115448 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 04:40:06.115464 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 04:40:06.115482 kernel: rcu: RCU event tracing is enabled.
Jul 15 04:40:06.115499 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 04:40:06.115516 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 04:40:06.115533 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 04:40:06.115550 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 04:40:06.115571 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 04:40:06.115588 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 04:40:06.115605 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 04:40:06.115670 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 04:40:06.115694 kernel: GICv3: 96 SPIs implemented
Jul 15 04:40:06.115712 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 04:40:06.115729 kernel: Root IRQ handler: gic_handle_irq
Jul 15 04:40:06.115746 kernel: GICv3: GICv3 features: 16 PPIs
Jul 15 04:40:06.115763 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 15 04:40:06.115781 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 15 04:40:06.115798 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 15 04:40:06.115815 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 04:40:06.115838 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jul 15 04:40:06.115856 kernel: GICv3: using LPI property table @0x0000000400110000
Jul 15 04:40:06.115872 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 15 04:40:06.115890 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jul 15 04:40:06.115907 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 04:40:06.115924 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 15 04:40:06.115942 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 15 04:40:06.115959 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 15 04:40:06.115976 kernel: Console: colour dummy device 80x25
Jul 15 04:40:06.115994 kernel: printk: legacy console [tty1] enabled
Jul 15 04:40:06.116011 kernel: ACPI: Core revision 20240827
Jul 15 04:40:06.116033 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 15 04:40:06.116050 kernel: pid_max: default: 32768 minimum: 301
Jul 15 04:40:06.116067 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 04:40:06.116085 kernel: landlock: Up and running.
Jul 15 04:40:06.116102 kernel: SELinux: Initializing.
Jul 15 04:40:06.116119 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 04:40:06.116137 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 04:40:06.116154 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 04:40:06.116171 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 04:40:06.116193 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 04:40:06.116210 kernel: Remapping and enabling EFI services.
Jul 15 04:40:06.116226 kernel: smp: Bringing up secondary CPUs ...
Jul 15 04:40:06.116243 kernel: Detected PIPT I-cache on CPU1
Jul 15 04:40:06.116261 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 15 04:40:06.116278 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jul 15 04:40:06.116295 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 15 04:40:06.116312 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 04:40:06.116330 kernel: SMP: Total of 2 processors activated.
Jul 15 04:40:06.116361 kernel: CPU: All CPU(s) started at EL1
Jul 15 04:40:06.116379 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 04:40:06.116401 kernel: CPU features: detected: 32-bit EL1 Support
Jul 15 04:40:06.116418 kernel: CPU features: detected: CRC32 instructions
Jul 15 04:40:06.116436 kernel: alternatives: applying system-wide alternatives
Jul 15 04:40:06.116477 kernel: Memory: 3796580K/4030464K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 212536K reserved, 16384K cma-reserved)
Jul 15 04:40:06.116499 kernel: devtmpfs: initialized
Jul 15 04:40:06.116522 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 04:40:06.116540 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 04:40:06.116558 kernel: 16928 pages in range for non-PLT usage
Jul 15 04:40:06.116576 kernel: 508448 pages in range for PLT usage
Jul 15 04:40:06.116594 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 04:40:06.116611 kernel: SMBIOS 3.0.0 present.
Jul 15 04:40:06.116657 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 15 04:40:06.116676 kernel: DMI: Memory slots populated: 0/0
Jul 15 04:40:06.116694 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 04:40:06.116717 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 04:40:06.116736 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 04:40:06.116754 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 04:40:06.116771 kernel: audit: initializing netlink subsys (disabled)
Jul 15 04:40:06.116789 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Jul 15 04:40:06.116807 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 04:40:06.116824 kernel: cpuidle: using governor menu
Jul 15 04:40:06.116842 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 04:40:06.116859 kernel: ASID allocator initialised with 65536 entries
Jul 15 04:40:06.116881 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 04:40:06.116899 kernel: Serial: AMBA PL011 UART driver
Jul 15 04:40:06.116919 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 04:40:06.116938 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 04:40:06.116957 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 04:40:06.116976 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 04:40:06.116996 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 04:40:06.117015 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 04:40:06.117033 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 04:40:06.117056 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 04:40:06.117074 kernel: ACPI: Added _OSI(Module Device)
Jul 15 04:40:06.117093 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 04:40:06.117111 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 04:40:06.117129 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 04:40:06.117147 kernel: ACPI: Interpreter enabled
Jul 15 04:40:06.117165 kernel: ACPI: Using GIC for interrupt routing
Jul 15 04:40:06.117183 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 04:40:06.117201 kernel: ACPI: CPU0 has been hot-added
Jul 15 04:40:06.117223 kernel: ACPI: CPU1 has been hot-added
Jul 15 04:40:06.117242 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 15 04:40:06.117551 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 04:40:06.117802 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 04:40:06.117991 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 04:40:06.118175 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 15 04:40:06.118358 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 15 04:40:06.118389 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 15 04:40:06.118407 kernel: acpiphp: Slot [1] registered
Jul 15 04:40:06.118425 kernel: acpiphp: Slot [2] registered
Jul 15 04:40:06.118442 kernel: acpiphp: Slot [3] registered
Jul 15 04:40:06.118460 kernel: acpiphp: Slot [4] registered
Jul 15 04:40:06.118477 kernel: acpiphp: Slot [5] registered
Jul 15 04:40:06.118494 kernel: acpiphp: Slot [6] registered
Jul 15 04:40:06.118512 kernel: acpiphp: Slot [7] registered
Jul 15 04:40:06.118529 kernel: acpiphp: Slot [8] registered
Jul 15 04:40:06.118546 kernel: acpiphp: Slot [9] registered
Jul 15 04:40:06.118567 kernel: acpiphp: Slot [10] registered
Jul 15 04:40:06.118585 kernel: acpiphp: Slot [11] registered
Jul 15 04:40:06.118602 kernel: acpiphp: Slot [12] registered
Jul 15 04:40:06.118637 kernel: acpiphp: Slot [13] registered
Jul 15 04:40:06.118688 kernel: acpiphp: Slot [14] registered
Jul 15 04:40:06.118707 kernel: acpiphp: Slot [15] registered
Jul 15 04:40:06.118725 kernel: acpiphp: Slot [16] registered
Jul 15 04:40:06.118743 kernel: acpiphp: Slot [17] registered
Jul 15 04:40:06.118761 kernel: acpiphp: Slot [18] registered
Jul 15 04:40:06.118785 kernel: acpiphp: Slot [19] registered
Jul 15 04:40:06.118803 kernel: acpiphp: Slot [20] registered
Jul 15 04:40:06.118820 kernel: acpiphp: Slot [21] registered
Jul 15 04:40:06.118838 kernel: acpiphp: Slot [22] registered
Jul 15 04:40:06.118856 kernel: acpiphp: Slot [23] registered
Jul 15 04:40:06.118874 kernel: acpiphp: Slot [24] registered
Jul 15 04:40:06.118891 kernel: acpiphp: Slot [25] registered
Jul 15 04:40:06.118909 kernel: acpiphp: Slot [26] registered
Jul 15 04:40:06.118926 kernel: acpiphp: Slot [27] registered
Jul 15 04:40:06.118944 kernel: acpiphp: Slot [28] registered
Jul 15 04:40:06.118966 kernel: acpiphp: Slot [29] registered
Jul 15 04:40:06.118984 kernel: acpiphp: Slot [30] registered
Jul 15 04:40:06.119001 kernel: acpiphp: Slot [31] registered
Jul 15 04:40:06.119019 kernel: PCI host bridge to bus 0000:00
Jul 15 04:40:06.119233 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 15 04:40:06.119407 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 04:40:06.119578 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 15 04:40:06.120329 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 15 04:40:06.121120 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jul 15 04:40:06.121358 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jul 15 04:40:06.121549 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jul 15 04:40:06.121873 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jul 15 04:40:06.123747 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jul 15 04:40:06.123998 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 15 04:40:06.124233 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jul 15 04:40:06.124432 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jul 15 04:40:06.125726 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jul 15 04:40:06.125954 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jul 15 04:40:06.126140 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 15 04:40:06.126325 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Jul 15 04:40:06.126510 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Jul 15 04:40:06.126749 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Jul 15 04:40:06.126936 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Jul 15 04:40:06.127128 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Jul 15 04:40:06.127300 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 15 04:40:06.127464 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 04:40:06.129784 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 15 04:40:06.129831 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 04:40:06.129860 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 04:40:06.129879 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 04:40:06.129897 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 04:40:06.129915 kernel: iommu: Default domain type: Translated
Jul 15 04:40:06.129933 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 04:40:06.129951 kernel: efivars: Registered efivars operations
Jul 15 04:40:06.129968 kernel: vgaarb: loaded
Jul 15 04:40:06.129986 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 04:40:06.130004 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 04:40:06.130026 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 04:40:06.130044 kernel: pnp: PnP ACPI init
Jul 15 04:40:06.130273 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 15 04:40:06.130300 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 04:40:06.130318 kernel: NET: Registered PF_INET protocol family
Jul 15 04:40:06.130336 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 04:40:06.130354 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 04:40:06.130372 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 04:40:06.130395 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 04:40:06.130413 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 04:40:06.130431 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 04:40:06.130449 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 04:40:06.130467 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 04:40:06.130485 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 04:40:06.130503 kernel: PCI: CLS 0 bytes, default 64
Jul 15 04:40:06.130520 kernel: kvm [1]: HYP mode not available
Jul 15 04:40:06.130538 kernel: Initialise system trusted keyrings
Jul 15 04:40:06.130560 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 04:40:06.130578 kernel: Key type asymmetric registered
Jul 15 04:40:06.130595 kernel: Asymmetric key parser 'x509' registered
Jul 15 04:40:06.130612 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 04:40:06.130654 kernel: io scheduler mq-deadline registered
Jul 15 04:40:06.130673 kernel: io scheduler kyber registered
Jul 15 04:40:06.130691 kernel: io scheduler bfq registered
Jul 15 04:40:06.130904 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 15 04:40:06.130936 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 04:40:06.130955 kernel: ACPI: button: Power Button [PWRB]
Jul 15 04:40:06.130973 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 15 04:40:06.130990 kernel: ACPI: button: Sleep Button [SLPB]
Jul 15 04:40:06.131008 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 04:40:06.131027 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 15 04:40:06.131214 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 15 04:40:06.131240 kernel: printk: legacy console [ttyS0] disabled
Jul 15 04:40:06.131259 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 15 04:40:06.131281 kernel: printk: legacy console [ttyS0] enabled
Jul 15 04:40:06.131299 kernel: printk: legacy bootconsole [uart0] disabled
Jul 15 04:40:06.131316 kernel: thunder_xcv, ver 1.0
Jul 15 04:40:06.131334 kernel: thunder_bgx, ver 1.0
Jul 15 04:40:06.131351 kernel: nicpf, ver 1.0
Jul 15 04:40:06.131368 kernel: nicvf, ver 1.0
Jul 15 04:40:06.131556 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 04:40:06.133909 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T04:40:05 UTC (1752554405)
Jul 15 04:40:06.133973 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 04:40:06.133993 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jul 15 04:40:06.134011 kernel: NET: Registered PF_INET6 protocol family
Jul 15 04:40:06.134029 kernel: watchdog: NMI not fully supported
Jul 15 04:40:06.134048 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 04:40:06.134067 kernel: Segment Routing with IPv6
Jul 15 04:40:06.134085 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 04:40:06.134103 kernel: NET: Registered PF_PACKET protocol family
Jul 15 04:40:06.134120 kernel: Key type dns_resolver registered
Jul 15 04:40:06.134142 kernel: registered taskstats version 1
Jul 15 04:40:06.134160 kernel: Loading compiled-in X.509 certificates
Jul 15 04:40:06.134178 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: b5c59c413839929aea5bd4b52ae6eaff0e245cd2'
Jul 15 04:40:06.134196 kernel: Demotion targets for Node 0: null
Jul 15 04:40:06.134214 kernel: Key type .fscrypt registered
Jul 15 04:40:06.134231 kernel: Key type fscrypt-provisioning registered
Jul 15 04:40:06.134249 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 04:40:06.134266 kernel: ima: Allocated hash algorithm: sha1
Jul 15 04:40:06.134284 kernel: ima: No architecture policies found
Jul 15 04:40:06.134305 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 04:40:06.134323 kernel: clk: Disabling unused clocks
Jul 15 04:40:06.134341 kernel: PM: genpd: Disabling unused power domains
Jul 15 04:40:06.134359 kernel: Warning: unable to open an initial console.
Jul 15 04:40:06.134377 kernel: Freeing unused kernel memory: 39424K
Jul 15 04:40:06.134394 kernel: Run /init as init process
Jul 15 04:40:06.134412 kernel: with arguments:
Jul 15 04:40:06.134429 kernel: /init
Jul 15 04:40:06.134447 kernel: with environment:
Jul 15 04:40:06.134464 kernel: HOME=/
Jul 15 04:40:06.134485 kernel: TERM=linux
Jul 15 04:40:06.134503 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 04:40:06.134522 systemd[1]: Successfully made /usr/ read-only.
Jul 15 04:40:06.134546 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 04:40:06.134566 systemd[1]: Detected virtualization amazon.
Jul 15 04:40:06.134585 systemd[1]: Detected architecture arm64.
Jul 15 04:40:06.134603 systemd[1]: Running in initrd.
Jul 15 04:40:06.135712 systemd[1]: No hostname configured, using default hostname.
Jul 15 04:40:06.135743 systemd[1]: Hostname set to .
Jul 15 04:40:06.135763 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 04:40:06.135782 systemd[1]: Queued start job for default target initrd.target.
Jul 15 04:40:06.135802 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:40:06.135822 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:40:06.135842 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 04:40:06.135862 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 04:40:06.135890 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 04:40:06.135910 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 04:40:06.135932 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 04:40:06.135952 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 04:40:06.135972 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:40:06.135991 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:40:06.136010 systemd[1]: Reached target paths.target - Path Units.
Jul 15 04:40:06.136034 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 04:40:06.136053 systemd[1]: Reached target swap.target - Swaps.
Jul 15 04:40:06.136072 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 04:40:06.136091 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 04:40:06.136110 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 04:40:06.136130 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 04:40:06.136149 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 04:40:06.136168 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:40:06.136192 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 04:40:06.136211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 04:40:06.136230 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 04:40:06.136249 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 04:40:06.136269 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 04:40:06.136289 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 04:40:06.136308 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 04:40:06.136328 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 04:40:06.136347 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 04:40:06.136371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 04:40:06.139675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:40:06.139735 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 04:40:06.139763 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 04:40:06.139793 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 04:40:06.139814 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 04:40:06.139881 systemd-journald[258]: Collecting audit messages is disabled.
Jul 15 04:40:06.139924 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 04:40:06.139949 kernel: Bridge firewalling registered
Jul 15 04:40:06.139986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 04:40:06.140007 systemd-journald[258]: Journal started
Jul 15 04:40:06.140045 systemd-journald[258]: Runtime Journal (/run/log/journal/ec213ddc0635a2eeb8ebd37a84b047ce) is 8M, max 75.3M, 67.3M free.
Jul 15 04:40:06.088568 systemd-modules-load[259]: Inserted module 'overlay'
Jul 15 04:40:06.134979 systemd-modules-load[259]: Inserted module 'br_netfilter'
Jul 15 04:40:06.150196 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 04:40:06.153424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:40:06.158017 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 04:40:06.170603 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 04:40:06.178308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 04:40:06.190859 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 04:40:06.206137 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 04:40:06.228105 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:40:06.245060 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 04:40:06.249167 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 04:40:06.263403 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:40:06.273809 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 04:40:06.285675 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 04:40:06.293876 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 04:40:06.328876 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd
Jul 15 04:40:06.380761 systemd-resolved[298]: Positive Trust Anchors:
Jul 15 04:40:06.383407 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 04:40:06.388754 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 04:40:06.478658 kernel: SCSI subsystem initialized
Jul 15 04:40:06.485653 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 04:40:06.498683 kernel: iscsi: registered transport (tcp)
Jul 15 04:40:06.520128 kernel: iscsi: registered transport (qla4xxx)
Jul 15 04:40:06.520206 kernel: QLogic iSCSI HBA Driver
Jul 15 04:40:06.552741 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 04:40:06.578608 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 04:40:06.590148 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 04:40:06.656661 kernel: random: crng init done
Jul 15 04:40:06.657145 systemd-resolved[298]: Defaulting to hostname 'linux'.
Jul 15 04:40:06.660719 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 04:40:06.672405 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:40:06.693076 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 04:40:06.700472 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 04:40:06.800683 kernel: raid6: neonx8 gen() 6520 MB/s
Jul 15 04:40:06.817653 kernel: raid6: neonx4 gen() 6493 MB/s
Jul 15 04:40:06.834653 kernel: raid6: neonx2 gen() 5443 MB/s
Jul 15 04:40:06.851652 kernel: raid6: neonx1 gen() 3946 MB/s
Jul 15 04:40:06.868652 kernel: raid6: int64x8 gen() 3624 MB/s
Jul 15 04:40:06.885652 kernel: raid6: int64x4 gen() 3716 MB/s
Jul 15 04:40:06.902652 kernel: raid6: int64x2 gen() 3588 MB/s
Jul 15 04:40:06.920649 kernel: raid6: int64x1 gen() 2768 MB/s
Jul 15 04:40:06.920686 kernel: raid6: using algorithm neonx8 gen() 6520 MB/s
Jul 15 04:40:06.939623 kernel: raid6: .... xor() 4722 MB/s, rmw enabled
Jul 15 04:40:06.939668 kernel: raid6: using neon recovery algorithm
Jul 15 04:40:06.948197 kernel: xor: measuring software checksum speed
Jul 15 04:40:06.948246 kernel: 8regs : 12891 MB/sec
Jul 15 04:40:06.949395 kernel: 32regs : 13040 MB/sec
Jul 15 04:40:06.951726 kernel: arm64_neon : 8520 MB/sec
Jul 15 04:40:06.951765 kernel: xor: using function: 32regs (13040 MB/sec)
Jul 15 04:40:07.043683 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 04:40:07.054450 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 04:40:07.065525 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 04:40:07.123840 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Jul 15 04:40:07.133858 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 04:40:07.138428 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 04:40:07.180723 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Jul 15 04:40:07.222853 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 04:40:07.229602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 04:40:07.363803 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:40:07.369960 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 04:40:07.518598 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 15 04:40:07.518692 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 15 04:40:07.525832 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 04:40:07.529531 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 15 04:40:07.538844 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 15 04:40:07.540031 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 15 04:40:07.549935 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 15 04:40:07.559672 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:4b:18:ec:03:03
Jul 15 04:40:07.563927 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 04:40:07.563992 kernel: GPT:9289727 != 16777215
Jul 15 04:40:07.564016 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 04:40:07.568647 kernel: GPT:9289727 != 16777215
Jul 15 04:40:07.568705 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 04:40:07.568730 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 04:40:07.570029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 04:40:07.572329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:40:07.580714 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:40:07.586512 (udev-worker)[556]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 04:40:07.591545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 04:40:07.597641 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 04:40:07.638901 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:40:07.654676 kernel: nvme nvme0: using unchecked data buffer
Jul 15 04:40:07.775012 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 15 04:40:07.834255 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 04:40:07.861195 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 15 04:40:07.901235 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 15 04:40:07.907417 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 15 04:40:07.931947 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 15 04:40:07.938225 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 04:40:07.941369 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:40:07.949911 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 04:40:07.956122 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 04:40:07.962143 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 04:40:07.991144 disk-uuid[685]: Primary Header is updated.
Jul 15 04:40:07.991144 disk-uuid[685]: Secondary Entries is updated.
Jul 15 04:40:07.991144 disk-uuid[685]: Secondary Header is updated.
Jul 15 04:40:08.001947 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 04:40:08.012680 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 04:40:08.020661 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 04:40:09.028664 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 04:40:09.030086 disk-uuid[687]: The operation has completed successfully.
Jul 15 04:40:09.212577 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 04:40:09.213153 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 04:40:09.297250 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 04:40:09.337830 sh[952]: Success
Jul 15 04:40:09.360688 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 04:40:09.360761 kernel: device-mapper: uevent: version 1.0.3
Jul 15 04:40:09.362726 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 04:40:09.373658 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 15 04:40:09.474496 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 04:40:09.481232 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 04:40:09.509464 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 04:40:09.540662 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 04:40:09.541707 kernel: BTRFS: device fsid a7b7592d-2d1d-4236-b04f-dc58147b4692 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (975)
Jul 15 04:40:09.547848 kernel: BTRFS info (device dm-0): first mount of filesystem a7b7592d-2d1d-4236-b04f-dc58147b4692
Jul 15 04:40:09.547914 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:40:09.547941 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 04:40:09.681535 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 04:40:09.686354 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 04:40:09.692192 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 04:40:09.698183 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 04:40:09.705538 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 04:40:09.759687 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1007)
Jul 15 04:40:09.764478 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:40:09.764549 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:40:09.766043 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 04:40:09.788769 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:40:09.791723 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 04:40:09.799175 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 04:40:09.902267 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 04:40:09.913563 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 04:40:09.999392 systemd-networkd[1145]: lo: Link UP
Jul 15 04:40:09.999415 systemd-networkd[1145]: lo: Gained carrier
Jul 15 04:40:10.001812 systemd-networkd[1145]: Enumeration completed
Jul 15 04:40:10.002682 systemd-networkd[1145]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:40:10.002690 systemd-networkd[1145]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 04:40:10.003245 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 04:40:10.010399 systemd[1]: Reached target network.target - Network.
Jul 15 04:40:10.015576 systemd-networkd[1145]: eth0: Link UP
Jul 15 04:40:10.015583 systemd-networkd[1145]: eth0: Gained carrier
Jul 15 04:40:10.015605 systemd-networkd[1145]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 04:40:10.055693 systemd-networkd[1145]: eth0: DHCPv4 address 172.31.20.207/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 15 04:40:10.289671 ignition[1069]: Ignition 2.21.0
Jul 15 04:40:10.289702 ignition[1069]: Stage: fetch-offline
Jul 15 04:40:10.291016 ignition[1069]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:40:10.296967 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 04:40:10.291041 ignition[1069]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 04:40:10.305188 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 15 04:40:10.291867 ignition[1069]: Ignition finished successfully
Jul 15 04:40:10.345351 ignition[1154]: Ignition 2.21.0
Jul 15 04:40:10.345935 ignition[1154]: Stage: fetch
Jul 15 04:40:10.346496 ignition[1154]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:40:10.346520 ignition[1154]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 04:40:10.347389 ignition[1154]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 04:40:10.381448 ignition[1154]: PUT result: OK
Jul 15 04:40:10.385353 ignition[1154]: parsed url from cmdline: ""
Jul 15 04:40:10.385370 ignition[1154]: no config URL provided
Jul 15 04:40:10.385384 ignition[1154]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 04:40:10.385410 ignition[1154]: no config at "/usr/lib/ignition/user.ign"
Jul 15 04:40:10.385441 ignition[1154]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 04:40:10.394988 ignition[1154]: PUT result: OK
Jul 15 04:40:10.395300 ignition[1154]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 15 04:40:10.400055 ignition[1154]: GET result: OK
Jul 15 04:40:10.400508 ignition[1154]: parsing config with SHA512: 3da3cb25c4e2904708e3c9a547a0c1ec81feb5029f1a2b577a80acd4f878564cdaf7155597c86e69ea9bc668773d9754d043d3db2083f025014742bb578a8680
Jul 15 04:40:10.415164 unknown[1154]: fetched base config from "system"
Jul 15 04:40:10.415186 unknown[1154]: fetched base config from "system"
Jul 15 04:40:10.416313 ignition[1154]: fetch: fetch complete
Jul 15 04:40:10.415198 unknown[1154]: fetched user config from "aws"
Jul 15 04:40:10.416326 ignition[1154]: fetch: fetch passed
Jul 15 04:40:10.416448 ignition[1154]: Ignition finished successfully
Jul 15 04:40:10.428206 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 15 04:40:10.434329 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 04:40:10.482781 ignition[1161]: Ignition 2.21.0
Jul 15 04:40:10.482815 ignition[1161]: Stage: kargs
Jul 15 04:40:10.483365 ignition[1161]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:40:10.483393 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 04:40:10.483549 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 04:40:10.495188 ignition[1161]: PUT result: OK
Jul 15 04:40:10.499791 ignition[1161]: kargs: kargs passed
Jul 15 04:40:10.499883 ignition[1161]: Ignition finished successfully
Jul 15 04:40:10.504549 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 04:40:10.511740 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 04:40:10.563503 ignition[1167]: Ignition 2.21.0
Jul 15 04:40:10.564049 ignition[1167]: Stage: disks
Jul 15 04:40:10.564658 ignition[1167]: no configs at "/usr/lib/ignition/base.d"
Jul 15 04:40:10.564683 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 04:40:10.564840 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 04:40:10.575716 ignition[1167]: PUT result: OK
Jul 15 04:40:10.582566 ignition[1167]: disks: disks passed
Jul 15 04:40:10.582887 ignition[1167]: Ignition finished successfully
Jul 15 04:40:10.589335 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 04:40:10.595604 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 04:40:10.602353 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 04:40:10.608465 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 04:40:10.611105 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 04:40:10.615782 systemd[1]: Reached target basic.target - Basic System.
Jul 15 04:40:10.623686 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 04:40:10.681298 systemd-fsck[1176]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 04:40:10.685129 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 04:40:10.692179 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 04:40:10.822657 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4818953b-9d82-47bd-ab58-d0aa5641a19a r/w with ordered data mode. Quota mode: none.
Jul 15 04:40:10.824951 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 04:40:10.829076 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 04:40:10.836351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 04:40:10.845425 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 04:40:10.850108 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 04:40:10.850196 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 04:40:10.850246 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 04:40:10.874829 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 04:40:10.881674 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 04:40:10.902836 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1195)
Jul 15 04:40:10.907421 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:40:10.907474 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:40:10.908786 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 04:40:10.917315 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 04:40:11.282984 initrd-setup-root[1219]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 04:40:11.318841 initrd-setup-root[1226]: cut: /sysroot/etc/group: No such file or directory
Jul 15 04:40:11.328251 initrd-setup-root[1233]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 04:40:11.337019 initrd-setup-root[1240]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 04:40:11.358803 systemd-networkd[1145]: eth0: Gained IPv6LL
Jul 15 04:40:11.655238 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 04:40:11.664832 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 04:40:11.671287 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 04:40:11.702475 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 04:40:11.709663 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:40:11.737577 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 04:40:11.757763 ignition[1308]: INFO : Ignition 2.21.0
Jul 15 04:40:11.759928 ignition[1308]: INFO : Stage: mount
Jul 15 04:40:11.759928 ignition[1308]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:40:11.759928 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 04:40:11.759928 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 04:40:11.772462 ignition[1308]: INFO : PUT result: OK
Jul 15 04:40:11.782335 ignition[1308]: INFO : mount: mount passed
Jul 15 04:40:11.782335 ignition[1308]: INFO : Ignition finished successfully
Jul 15 04:40:11.785678 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 04:40:11.795264 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 04:40:11.827571 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 04:40:11.869655 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1320)
Jul 15 04:40:11.875417 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8
Jul 15 04:40:11.875475 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 04:40:11.876796 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 04:40:11.885454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 04:40:11.946836 ignition[1337]: INFO : Ignition 2.21.0
Jul 15 04:40:11.949765 ignition[1337]: INFO : Stage: files
Jul 15 04:40:11.949765 ignition[1337]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:40:11.949765 ignition[1337]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 04:40:11.949765 ignition[1337]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 04:40:11.961475 ignition[1337]: INFO : PUT result: OK
Jul 15 04:40:11.969792 ignition[1337]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 04:40:11.974818 ignition[1337]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 04:40:11.974818 ignition[1337]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 04:40:11.986031 ignition[1337]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 04:40:11.990731 ignition[1337]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 04:40:11.994366 unknown[1337]: wrote ssh authorized keys file for user: core
Jul 15 04:40:11.997052 ignition[1337]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 04:40:12.011200 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 15 04:40:12.015979 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 15 04:40:12.113371 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 04:40:12.254498 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 15 04:40:12.259429 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 04:40:12.263862 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 04:40:12.268197 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 04:40:12.272728 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 04:40:12.277114 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 04:40:12.282243 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 04:40:12.282243 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 04:40:12.282243 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 04:40:12.299270 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 04:40:12.303459 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 04:40:12.303459 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 15 04:40:12.303459 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 15 04:40:12.303459 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 15 04:40:12.303459 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 15 04:40:12.797000 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 04:40:13.196254 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 15 04:40:13.196254 ignition[1337]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 04:40:13.205933 ignition[1337]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 04:40:13.205933 ignition[1337]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 04:40:13.205933 ignition[1337]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 04:40:13.205933 ignition[1337]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 04:40:13.205933 ignition[1337]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 04:40:13.205933 ignition[1337]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 04:40:13.205933 ignition[1337]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 04:40:13.205933 ignition[1337]: INFO : files: files passed
Jul 15 04:40:13.205933 ignition[1337]: INFO : Ignition finished successfully
Jul 15 04:40:13.241847 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 04:40:13.250904 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 04:40:13.256446 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 04:40:13.287363 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 04:40:13.287687 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 04:40:13.305002 initrd-setup-root-after-ignition[1367]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:40:13.305002 initrd-setup-root-after-ignition[1367]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:40:13.317874 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 04:40:13.323890 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 04:40:13.331507 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 04:40:13.335921 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 04:40:13.420830 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 04:40:13.421038 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 04:40:13.429009 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 04:40:13.435854 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 04:40:13.438863 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 04:40:13.440539 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 04:40:13.483982 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 04:40:13.494724 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 04:40:13.535315 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 04:40:13.536206 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:40:13.536722 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 04:40:13.537510 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 04:40:13.537855 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 04:40:13.539096 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 04:40:13.539956 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 04:40:13.540788 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 04:40:13.541399 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 04:40:13.541746 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 04:40:13.542055 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 04:40:13.542441 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 04:40:13.543200 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 04:40:13.543600 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 04:40:13.544485 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 04:40:13.545080 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 04:40:13.545814 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 04:40:13.546108 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 04:40:13.547402 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:40:13.548327 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:40:13.549017 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 04:40:13.586175 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:40:13.592509 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 04:40:13.593195 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 04:40:13.602962 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 04:40:13.603832 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 04:40:13.612409 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 04:40:13.612763 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 04:40:13.627112 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 04:40:13.663978 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 04:40:13.672155 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 04:40:13.672827 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:40:13.688414 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 04:40:13.690661 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 04:40:13.723128 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 04:40:13.726146 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 04:40:13.753989 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 04:40:13.759663 ignition[1391]: INFO : Ignition 2.21.0
Jul 15 04:40:13.759663 ignition[1391]: INFO : Stage: umount
Jul 15 04:40:13.759663 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 04:40:13.759663 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 04:40:13.759663 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 04:40:13.773904 ignition[1391]: INFO : PUT result: OK
Jul 15 04:40:13.779177 ignition[1391]: INFO : umount: umount passed
Jul 15 04:40:13.783822 ignition[1391]: INFO : Ignition finished successfully
Jul 15 04:40:13.781801 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 04:40:13.783672 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 04:40:13.793654 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 04:40:13.795667 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 04:40:13.801385 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 04:40:13.801548 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 04:40:13.809936 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 04:40:13.810052 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 04:40:13.817201 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 15 04:40:13.817301 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 15 04:40:13.822227 systemd[1]: Stopped target network.target - Network.
Jul 15 04:40:13.826869 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 04:40:13.826981 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 04:40:13.830423 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 04:40:13.837326 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 04:40:13.837468 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:40:13.842853 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 04:40:13.845371 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 04:40:13.853427 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 04:40:13.853517 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 04:40:13.856190 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 04:40:13.856258 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 04:40:13.859136 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 04:40:13.859235 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 04:40:13.867305 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 04:40:13.867405 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 04:40:13.870173 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 04:40:13.870291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 04:40:13.879431 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 04:40:13.882352 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 04:40:13.915033 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 04:40:13.915240 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 04:40:13.921592 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 04:40:13.922389 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 04:40:13.922596 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 04:40:13.943692 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 04:40:13.945518 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 04:40:13.954710 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 04:40:13.954964 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:40:13.964675 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 04:40:13.970238 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 04:40:13.970530 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 04:40:13.979369 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 04:40:13.979480 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:40:13.988559 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 04:40:13.988704 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 04:40:13.999834 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 04:40:13.999944 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:40:14.014951 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 04:40:14.026805 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 04:40:14.027384 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 04:40:14.045236 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 04:40:14.048289 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 04:40:14.052937 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 04:40:14.053080 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 04:40:14.056395 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 04:40:14.056478 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 04:40:14.057178 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 04:40:14.057261 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 04:40:14.067815 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 04:40:14.067924 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 04:40:14.077266 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 04:40:14.077368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 04:40:14.083899 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 04:40:14.084260 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 04:40:14.084349 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 04:40:14.094037 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 04:40:14.094140 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 04:40:14.107338 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 15 04:40:14.107431 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 04:40:14.123557 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 04:40:14.123680 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 04:40:14.132672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 04:40:14.132758 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 04:40:14.141528 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 04:40:14.141666 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 15 04:40:14.141752 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 04:40:14.141835 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 04:40:14.142523 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 04:40:14.144669 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 04:40:14.158046 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 04:40:14.158276 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 04:40:14.162088 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 04:40:14.173557 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 04:40:14.220745 systemd[1]: Switching root.
Jul 15 04:40:14.287771 systemd-journald[258]: Journal stopped
Jul 15 04:40:16.884835 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Jul 15 04:40:16.884971 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 04:40:16.885014 kernel: SELinux: policy capability open_perms=1
Jul 15 04:40:16.885042 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 04:40:16.885079 kernel: SELinux: policy capability always_check_network=0
Jul 15 04:40:16.885109 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 04:40:16.885139 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 04:40:16.885168 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 04:40:16.885195 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 04:40:16.885226 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 04:40:16.885258 kernel: audit: type=1403 audit(1752554414.786:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 04:40:16.885295 systemd[1]: Successfully loaded SELinux policy in 114.341ms.
Jul 15 04:40:16.885338 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.722ms.
Jul 15 04:40:16.885371 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 04:40:16.885402 systemd[1]: Detected virtualization amazon.
Jul 15 04:40:16.885429 systemd[1]: Detected architecture arm64.
Jul 15 04:40:16.885458 systemd[1]: Detected first boot.
Jul 15 04:40:16.885496 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 04:40:16.885527 zram_generator::config[1434]: No configuration found.
Jul 15 04:40:16.885566 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 04:40:16.885596 systemd[1]: Populated /etc with preset unit settings.
Jul 15 04:40:16.893952 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 04:40:16.894007 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 04:40:16.894041 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 04:40:16.894070 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 04:40:16.894100 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 04:40:16.894133 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 04:40:16.894172 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 04:40:16.894202 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 04:40:16.894230 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 04:40:16.894260 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 04:40:16.894291 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 04:40:16.894322 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 04:40:16.894353 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 04:40:16.894382 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 04:40:16.894410 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 04:40:16.894442 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 04:40:16.894472 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 04:40:16.894502 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 04:40:16.894530 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 04:40:16.894561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 04:40:16.894591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 04:40:16.901599 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 04:40:16.901694 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 04:40:16.901726 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 04:40:16.901756 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 04:40:16.901790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 04:40:16.901821 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 04:40:16.901852 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 04:40:16.901885 systemd[1]: Reached target swap.target - Swaps.
Jul 15 04:40:16.901913 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 04:40:16.901944 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 04:40:16.901980 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 04:40:16.902010 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 04:40:16.902038 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 04:40:16.902069 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 04:40:16.902098 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 04:40:16.902129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 04:40:16.902157 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 04:40:16.902185 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 04:40:16.902213 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 04:40:16.902246 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 04:40:16.902277 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 04:40:16.902308 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 04:40:16.902336 systemd[1]: Reached target machines.target - Containers.
Jul 15 04:40:16.902365 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 04:40:16.902393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 04:40:16.902421 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 04:40:16.902450 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 04:40:16.902479 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 04:40:16.902512 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 04:40:16.902542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 04:40:16.902570 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 04:40:16.902597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 04:40:16.906704 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 04:40:16.906760 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 04:40:16.906790 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 04:40:16.906820 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 04:40:16.906858 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 04:40:16.906893 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 04:40:16.906924 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 04:40:16.906954 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 04:40:16.906983 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 04:40:16.907013 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 04:40:16.907042 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 04:40:16.907069 kernel: loop: module loaded
Jul 15 04:40:16.907097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 04:40:16.907131 kernel: fuse: init (API version 7.41)
Jul 15 04:40:16.907162 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 04:40:16.907196 systemd[1]: Stopped verity-setup.service.
Jul 15 04:40:16.907225 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 04:40:16.907255 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 04:40:16.907283 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 04:40:16.907311 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 04:40:16.907339 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 04:40:16.907367 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 04:40:16.907397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 04:40:16.907429 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 04:40:16.907459 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 04:40:16.907490 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 04:40:16.907517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 04:40:16.907545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 04:40:16.907573 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 04:40:16.907603 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 04:40:16.907653 kernel: ACPI: bus type drm_connector registered
Jul 15 04:40:16.907685 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 04:40:16.907720 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 04:40:16.907749 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 04:40:16.907777 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 04:40:16.907856 systemd-journald[1513]: Collecting audit messages is disabled.
Jul 15 04:40:16.907910 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 04:40:16.907941 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 04:40:16.907969 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 04:40:16.907998 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 04:40:16.908032 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 04:40:16.908061 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 04:40:16.908089 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 04:40:16.908116 systemd-journald[1513]: Journal started
Jul 15 04:40:16.908166 systemd-journald[1513]: Runtime Journal (/run/log/journal/ec213ddc0635a2eeb8ebd37a84b047ce) is 8M, max 75.3M, 67.3M free.
Jul 15 04:40:16.206806 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 04:40:16.919610 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 04:40:16.221699 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 15 04:40:16.222512 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 04:40:16.934452 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 04:40:16.934550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 04:40:16.943805 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 04:40:16.952694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 04:40:16.956304 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 04:40:16.964324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 04:40:16.973440 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 04:40:16.984462 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 04:40:16.999886 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 04:40:17.035426 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 04:40:17.072819 kernel: loop0: detected capacity change from 0 to 105936
Jul 15 04:40:17.080701 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 04:40:17.088488 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 04:40:17.094689 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 04:40:17.103108 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 04:40:17.108998 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 04:40:17.116767 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 04:40:17.121951 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 04:40:17.167022 systemd-tmpfiles[1547]: ACLs are not supported, ignoring.
Jul 15 04:40:17.167065 systemd-tmpfiles[1547]: ACLs are not supported, ignoring.
Jul 15 04:40:17.177605 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 04:40:17.184884 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 04:40:17.193032 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 04:40:17.202831 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 04:40:17.210024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 04:40:17.222268 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 04:40:17.256673 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 04:40:17.276771 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 04:40:17.284825 systemd-journald[1513]: Time spent on flushing to /var/log/journal/ec213ddc0635a2eeb8ebd37a84b047ce is 33.722ms for 942 entries.
Jul 15 04:40:17.284825 systemd-journald[1513]: System Journal (/var/log/journal/ec213ddc0635a2eeb8ebd37a84b047ce) is 8M, max 195.6M, 187.6M free.
Jul 15 04:40:17.343787 systemd-journald[1513]: Received client request to flush runtime journal.
Jul 15 04:40:17.343873 kernel: loop1: detected capacity change from 0 to 134232
Jul 15 04:40:17.294321 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 04:40:17.296746 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 04:40:17.346077 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 04:40:17.373433 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 04:40:17.383605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 04:40:17.438706 kernel: loop2: detected capacity change from 0 to 211168
Jul 15 04:40:17.443268 systemd-tmpfiles[1588]: ACLs are not supported, ignoring.
Jul 15 04:40:17.443706 systemd-tmpfiles[1588]: ACLs are not supported, ignoring.
Jul 15 04:40:17.454762 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 04:40:17.554072 kernel: loop3: detected capacity change from 0 to 61256
Jul 15 04:40:17.675960 kernel: loop4: detected capacity change from 0 to 105936
Jul 15 04:40:17.695666 kernel: loop5: detected capacity change from 0 to 134232
Jul 15 04:40:17.714089 kernel: loop6: detected capacity change from 0 to 211168
Jul 15 04:40:17.747659 kernel: loop7: detected capacity change from 0 to 61256
Jul 15 04:40:17.764078 (sd-merge)[1597]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 15 04:40:17.765800 (sd-merge)[1597]: Merged extensions into '/usr'.
Jul 15 04:40:17.776696 systemd[1]: Reload requested from client PID 1546 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 04:40:17.776886 systemd[1]: Reloading...
Jul 15 04:40:17.966649 zram_generator::config[1623]: No configuration found.
Jul 15 04:40:18.273539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 04:40:18.467853 systemd[1]: Reloading finished in 689 ms.
Jul 15 04:40:18.496583 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 04:40:18.500483 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 04:40:18.523872 systemd[1]: Starting ensure-sysext.service...
Jul 15 04:40:18.530059 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 04:40:18.536487 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 04:40:18.571734 systemd[1]: Reload requested from client PID 1675 ('systemctl') (unit ensure-sysext.service)...
Jul 15 04:40:18.571919 systemd[1]: Reloading...
Jul 15 04:40:18.588968 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 04:40:18.590966 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 04:40:18.591906 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 04:40:18.594110 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 04:40:18.598010 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 04:40:18.598592 systemd-tmpfiles[1676]: ACLs are not supported, ignoring.
Jul 15 04:40:18.603155 systemd-tmpfiles[1676]: ACLs are not supported, ignoring.
Jul 15 04:40:18.624650 ldconfig[1538]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 04:40:18.622044 systemd-tmpfiles[1676]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 04:40:18.622058 systemd-tmpfiles[1676]: Skipping /boot
Jul 15 04:40:18.651180 systemd-tmpfiles[1676]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 04:40:18.651214 systemd-tmpfiles[1676]: Skipping /boot
Jul 15 04:40:18.681777 systemd-udevd[1677]: Using default interface naming scheme 'v255'.
Jul 15 04:40:18.744686 zram_generator::config[1707]: No configuration found.
Jul 15 04:40:19.057418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 04:40:19.102521 (udev-worker)[1715]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 04:40:19.358912 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 15 04:40:19.360565 systemd[1]: Reloading finished in 787 ms.
Jul 15 04:40:19.374960 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 04:40:19.386067 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 04:40:19.393244 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 04:40:19.498023 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 04:40:19.504597 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 04:40:19.509067 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 04:40:19.513390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 04:40:19.522946 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 04:40:19.531059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 04:40:19.537470 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 04:40:19.538877 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 04:40:19.543354 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 04:40:19.555080 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 04:40:19.564171 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 04:40:19.578234 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 04:40:19.596575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 04:40:19.601134 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 04:40:19.609064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 04:40:19.609329 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 04:40:19.611826 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 04:40:19.633106 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 04:40:19.640039 systemd[1]: Finished ensure-sysext.service.
Jul 15 04:40:19.673527 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 04:40:19.706081 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 04:40:19.772786 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 04:40:19.788721 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 04:40:19.818156 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 04:40:19.831930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 04:40:19.832796 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 04:40:19.840004 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 04:40:19.841207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 04:40:19.850768 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 04:40:19.852762 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 04:40:19.872007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 04:40:19.872513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 04:40:19.882254 augenrules[1908]: No rules
Jul 15 04:40:19.894576 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 04:40:19.895028 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 04:40:19.898751 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 04:40:19.908212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 04:40:19.908781 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 04:40:19.918435 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:40:20.101779 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 15 04:40:20.110071 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 04:40:20.134001 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 04:40:20.167949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:40:20.175853 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 04:40:20.282064 systemd-resolved[1832]: Positive Trust Anchors: Jul 15 04:40:20.282102 systemd-resolved[1832]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:40:20.282164 systemd-resolved[1832]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:40:20.286805 systemd-networkd[1831]: lo: Link UP Jul 15 04:40:20.286821 systemd-networkd[1831]: lo: Gained carrier Jul 15 04:40:20.290080 systemd-networkd[1831]: Enumeration completed Jul 15 04:40:20.291173 systemd-networkd[1831]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:40:20.291335 systemd-networkd[1831]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 15 04:40:20.291800 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 04:40:20.298771 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 04:40:20.298997 systemd-resolved[1832]: Defaulting to hostname 'linux'. Jul 15 04:40:20.304590 systemd-networkd[1831]: eth0: Link UP Jul 15 04:40:20.305103 systemd-networkd[1831]: eth0: Gained carrier Jul 15 04:40:20.305244 systemd-networkd[1831]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:40:20.307201 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 04:40:20.311162 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:40:20.315403 systemd[1]: Reached target network.target - Network. Jul 15 04:40:20.319998 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:40:20.323322 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:40:20.326429 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 04:40:20.329972 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 04:40:20.333514 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 04:40:20.336502 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 04:40:20.339793 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 04:40:20.343054 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 04:40:20.343114 systemd[1]: Reached target paths.target - Path Units. 
Jul 15 04:40:20.345441 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:40:20.350196 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 04:40:20.357265 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 04:40:20.366759 systemd-networkd[1831]: eth0: DHCPv4 address 172.31.20.207/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 15 04:40:20.368057 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 04:40:20.372873 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 04:40:20.376119 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 04:40:20.387880 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 04:40:20.391798 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 04:40:20.395840 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 04:40:20.399539 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:40:20.402824 systemd[1]: Reached target basic.target - Basic System. Jul 15 04:40:20.405354 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:40:20.405410 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:40:20.407804 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 04:40:20.413882 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 04:40:20.419874 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 04:40:20.424986 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 04:40:20.431336 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jul 15 04:40:20.436489 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 04:40:20.439473 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 04:40:20.442077 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 04:40:20.450093 systemd[1]: Started ntpd.service - Network Time Service. Jul 15 04:40:20.462855 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 04:40:20.478161 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 15 04:40:20.486303 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 04:40:20.511325 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 04:40:20.524133 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 04:40:20.528443 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 04:40:20.529333 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 04:40:20.531971 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 04:40:20.540013 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 04:40:20.558493 jq[1965]: false Jul 15 04:40:20.568043 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 04:40:20.579712 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 04:40:20.583519 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jul 15 04:40:20.583954 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 04:40:20.609759 jq[1975]: true Jul 15 04:40:20.645352 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 04:40:20.645939 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 04:40:20.664649 extend-filesystems[1966]: Found /dev/nvme0n1p6 Jul 15 04:40:20.706113 extend-filesystems[1966]: Found /dev/nvme0n1p9 Jul 15 04:40:20.728704 extend-filesystems[1966]: Checking size of /dev/nvme0n1p9 Jul 15 04:40:20.736896 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 04:40:20.741892 tar[1984]: linux-arm64/LICENSE Jul 15 04:40:20.741892 tar[1984]: linux-arm64/helm Jul 15 04:40:20.745649 coreos-metadata[1962]: Jul 15 04:40:20.742 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 04:40:20.746097 coreos-metadata[1962]: Jul 15 04:40:20.746 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 15 04:40:20.758707 coreos-metadata[1962]: Jul 15 04:40:20.747 INFO Fetch successful Jul 15 04:40:20.758707 coreos-metadata[1962]: Jul 15 04:40:20.747 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 15 04:40:20.758707 coreos-metadata[1962]: Jul 15 04:40:20.754 INFO Fetch successful Jul 15 04:40:20.758707 coreos-metadata[1962]: Jul 15 04:40:20.754 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 15 04:40:20.758707 coreos-metadata[1962]: Jul 15 04:40:20.755 INFO Fetch successful Jul 15 04:40:20.758707 coreos-metadata[1962]: Jul 15 04:40:20.755 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 15 04:40:20.758707 coreos-metadata[1962]: Jul 15 04:40:20.758 INFO Fetch successful Jul 15 04:40:20.758707 coreos-metadata[1962]: Jul 15 04:40:20.758 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 15 
04:40:20.748673 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 04:40:20.756634 (ntainerd)[2000]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 04:40:20.765693 coreos-metadata[1962]: Jul 15 04:40:20.764 INFO Fetch failed with 404: resource not found Jul 15 04:40:20.765693 coreos-metadata[1962]: Jul 15 04:40:20.765 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 15 04:40:20.765693 coreos-metadata[1962]: Jul 15 04:40:20.765 INFO Fetch successful Jul 15 04:40:20.765693 coreos-metadata[1962]: Jul 15 04:40:20.765 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 15 04:40:20.772132 coreos-metadata[1962]: Jul 15 04:40:20.766 INFO Fetch successful Jul 15 04:40:20.772132 coreos-metadata[1962]: Jul 15 04:40:20.766 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 15 04:40:20.772132 coreos-metadata[1962]: Jul 15 04:40:20.767 INFO Fetch successful Jul 15 04:40:20.772347 jq[1992]: true Jul 15 04:40:20.772757 coreos-metadata[1962]: Jul 15 04:40:20.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 15 04:40:20.776659 coreos-metadata[1962]: Jul 15 04:40:20.773 INFO Fetch successful Jul 15 04:40:20.776659 coreos-metadata[1962]: Jul 15 04:40:20.774 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 15 04:40:20.776659 coreos-metadata[1962]: Jul 15 04:40:20.776 INFO Fetch successful Jul 15 04:40:20.821347 dbus-daemon[1963]: [system] SELinux support is enabled Jul 15 04:40:20.821742 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 15 04:40:20.830155 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 04:40:20.830221 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 04:40:20.835871 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 04:40:20.835909 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 04:40:20.864530 extend-filesystems[1966]: Resized partition /dev/nvme0n1p9 Jul 15 04:40:20.870546 dbus-daemon[1963]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1831 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 15 04:40:20.888096 dbus-daemon[1963]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 04:40:20.890784 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 03:00:30 UTC 2025 (1): Starting Jul 15 04:40:20.890784 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 04:40:20.890784 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: ---------------------------------------------------- Jul 15 04:40:20.890784 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: ntp-4 is maintained by Network Time Foundation, Jul 15 04:40:20.890784 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 04:40:20.890784 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: corporation. 
Support and training for ntp-4 are Jul 15 04:40:20.890784 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: available at https://www.nwtime.org/support Jul 15 04:40:20.890784 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: ---------------------------------------------------- Jul 15 04:40:20.890125 ntpd[1968]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 03:00:30 UTC 2025 (1): Starting Jul 15 04:40:20.890172 ntpd[1968]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 04:40:20.890191 ntpd[1968]: ---------------------------------------------------- Jul 15 04:40:20.890207 ntpd[1968]: ntp-4 is maintained by Network Time Foundation, Jul 15 04:40:20.890223 ntpd[1968]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 04:40:20.890239 ntpd[1968]: corporation. Support and training for ntp-4 are Jul 15 04:40:20.890256 ntpd[1968]: available at https://www.nwtime.org/support Jul 15 04:40:20.890273 ntpd[1968]: ---------------------------------------------------- Jul 15 04:40:20.900760 extend-filesystems[2023]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 04:40:20.907875 update_engine[1974]: I20250715 04:40:20.893535 1974 main.cc:92] Flatcar Update Engine starting Jul 15 04:40:20.922055 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: proto: precision = 0.096 usec (-23) Jul 15 04:40:20.918892 ntpd[1968]: proto: precision = 0.096 usec (-23) Jul 15 04:40:20.911922 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 15 04:40:20.923730 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 15 04:40:20.931762 systemd[1]: Started update-engine.service - Update Engine. 
Jul 15 04:40:20.937299 ntpd[1968]: basedate set to 2025-07-03 Jul 15 04:40:20.940511 update_engine[1974]: I20250715 04:40:20.940090 1974 update_check_scheduler.cc:74] Next update check in 2m51s Jul 15 04:40:20.940568 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: basedate set to 2025-07-03 Jul 15 04:40:20.940568 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: gps base set to 2025-07-06 (week 2374) Jul 15 04:40:20.937337 ntpd[1968]: gps base set to 2025-07-06 (week 2374) Jul 15 04:40:20.950658 ntpd[1968]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 04:40:20.953875 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 04:40:20.953875 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 04:40:20.953875 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 04:40:20.953875 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: Listen normally on 3 eth0 172.31.20.207:123 Jul 15 04:40:20.953875 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: Listen normally on 4 lo [::1]:123 Jul 15 04:40:20.953875 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: bind(21) AF_INET6 fe80::44b:18ff:feec:303%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 04:40:20.953875 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: unable to create socket on eth0 (5) for fe80::44b:18ff:feec:303%2#123 Jul 15 04:40:20.953875 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: failed to init interface for address fe80::44b:18ff:feec:303%2 Jul 15 04:40:20.953875 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: Listening on routing socket on fd #21 for interface updates Jul 15 04:40:20.950754 ntpd[1968]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 04:40:20.951008 ntpd[1968]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 04:40:20.951067 ntpd[1968]: Listen normally on 3 eth0 172.31.20.207:123 Jul 15 04:40:20.951129 ntpd[1968]: Listen normally on 4 lo [::1]:123 Jul 15 04:40:20.951200 ntpd[1968]: bind(21) AF_INET6 fe80::44b:18ff:feec:303%2#123 flags 0x11 failed: 
Cannot assign requested address Jul 15 04:40:20.951235 ntpd[1968]: unable to create socket on eth0 (5) for fe80::44b:18ff:feec:303%2#123 Jul 15 04:40:20.951260 ntpd[1968]: failed to init interface for address fe80::44b:18ff:feec:303%2 Jul 15 04:40:20.951313 ntpd[1968]: Listening on routing socket on fd #21 for interface updates Jul 15 04:40:20.959644 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 15 04:40:20.959822 ntpd[1968]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 04:40:20.962858 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 04:40:20.962858 ntpd[1968]: 15 Jul 04:40:20 ntpd[1968]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 04:40:20.959881 ntpd[1968]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 04:40:20.975132 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 04:40:21.007562 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 04:40:21.012067 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 04:40:21.099674 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 15 04:40:21.114609 extend-filesystems[2023]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 15 04:40:21.114609 extend-filesystems[2023]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 04:40:21.114609 extend-filesystems[2023]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 15 04:40:21.122123 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 04:40:21.167311 bash[2050]: Updated "/home/core/.ssh/authorized_keys" Jul 15 04:40:21.167488 extend-filesystems[1966]: Resized filesystem in /dev/nvme0n1p9 Jul 15 04:40:21.122584 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 15 04:40:21.141835 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 04:40:21.150556 systemd[1]: Starting sshkeys.service... Jul 15 04:40:21.242923 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 15 04:40:21.253238 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 15 04:40:21.299933 systemd-logind[1973]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 04:40:21.300466 systemd-logind[1973]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 15 04:40:21.398409 locksmithd[2026]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 04:40:21.408341 dbus-daemon[1963]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 15 04:40:21.410896 dbus-daemon[1963]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2024 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 15 04:40:21.442517 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 15 04:40:21.445455 systemd-logind[1973]: New seat seat0. Jul 15 04:40:21.454521 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 04:40:21.467790 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 04:40:21.483381 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 15 04:40:21.688658 coreos-metadata[2059]: Jul 15 04:40:21.686 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 04:40:21.692723 coreos-metadata[2059]: Jul 15 04:40:21.691 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 15 04:40:21.698506 coreos-metadata[2059]: Jul 15 04:40:21.698 INFO Fetch successful Jul 15 04:40:21.698506 coreos-metadata[2059]: Jul 15 04:40:21.698 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 15 04:40:21.699916 coreos-metadata[2059]: Jul 15 04:40:21.699 INFO Fetch successful Jul 15 04:40:21.709499 unknown[2059]: wrote ssh authorized keys file for user: core Jul 15 04:40:21.754985 containerd[2000]: time="2025-07-15T04:40:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 04:40:21.754985 containerd[2000]: time="2025-07-15T04:40:21.754452660Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 04:40:21.791111 systemd-networkd[1831]: eth0: Gained IPv6LL Jul 15 04:40:21.797071 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 04:40:21.803771 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 04:40:21.809322 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 15 04:40:21.822186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:40:21.825417 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jul 15 04:40:21.860535 containerd[2000]: time="2025-07-15T04:40:21.860468533Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.196µs" Jul 15 04:40:21.868010 update-ssh-keys[2136]: Updated "/home/core/.ssh/authorized_keys" Jul 15 04:40:21.872467 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.879324961Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.879394369Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.880588549Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.880801213Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.880878697Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.881020981Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.881052937Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.881438521Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: 
skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.881476669Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.881506165Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.881529433Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 04:40:21.887951 containerd[2000]: time="2025-07-15T04:40:21.883521073Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 04:40:21.882684 systemd[1]: Finished sshkeys.service. Jul 15 04:40:21.892822 containerd[2000]: time="2025-07-15T04:40:21.890348269Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:40:21.892822 containerd[2000]: time="2025-07-15T04:40:21.892059685Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:40:21.892822 containerd[2000]: time="2025-07-15T04:40:21.892108105Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 04:40:21.892822 containerd[2000]: time="2025-07-15T04:40:21.892232653Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 04:40:21.896747 containerd[2000]: time="2025-07-15T04:40:21.896133097Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 04:40:21.896747 containerd[2000]: 
time="2025-07-15T04:40:21.896431141Z" level=info msg="metadata content store policy set" policy=shared Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921031297Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921210877Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921282889Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921338413Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921393517Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921460693Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921524593Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921605617Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921694501Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921738049Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 04:40:21.922678 containerd[2000]: 
time="2025-07-15T04:40:21.921778393Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.921846277Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.922234633Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 04:40:21.922678 containerd[2000]: time="2025-07-15T04:40:21.922308745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 04:40:21.923343 containerd[2000]: time="2025-07-15T04:40:21.922383973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 04:40:21.923343 containerd[2000]: time="2025-07-15T04:40:21.922434469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 04:40:21.923343 containerd[2000]: time="2025-07-15T04:40:21.922466533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 04:40:21.923343 containerd[2000]: time="2025-07-15T04:40:21.922521181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 04:40:21.923343 containerd[2000]: time="2025-07-15T04:40:21.922577557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 04:40:21.939781 containerd[2000]: time="2025-07-15T04:40:21.938226061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 04:40:21.939781 containerd[2000]: time="2025-07-15T04:40:21.938305657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 04:40:21.939781 containerd[2000]: time="2025-07-15T04:40:21.938344357Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 04:40:21.939781 containerd[2000]: time="2025-07-15T04:40:21.938377945Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 04:40:21.939781 containerd[2000]: time="2025-07-15T04:40:21.938780617Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 04:40:21.939781 containerd[2000]: time="2025-07-15T04:40:21.938819425Z" level=info msg="Start snapshots syncer" Jul 15 04:40:21.939781 containerd[2000]: time="2025-07-15T04:40:21.938863609Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 04:40:21.940129 containerd[2000]: time="2025-07-15T04:40:21.939240061Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController
\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 04:40:21.940129 containerd[2000]: time="2025-07-15T04:40:21.939338101Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 04:40:21.946929 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 04:40:21.962633 containerd[2000]: time="2025-07-15T04:40:21.962277253Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.967815853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971059381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971116405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971165917Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971211385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 04:40:21.973176 containerd[2000]: 
time="2025-07-15T04:40:21.971270473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971322481Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971386573Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971418493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971447989Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971502085Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971538961Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:40:21.973176 containerd[2000]: time="2025-07-15T04:40:21.971563357Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:40:21.973842 containerd[2000]: time="2025-07-15T04:40:21.971589517Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:40:21.973842 containerd[2000]: time="2025-07-15T04:40:21.971610361Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 04:40:21.973842 containerd[2000]: time="2025-07-15T04:40:21.971663569Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 04:40:21.973842 containerd[2000]: time="2025-07-15T04:40:21.971693089Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 04:40:21.973842 containerd[2000]: time="2025-07-15T04:40:21.971868253Z" level=info msg="runtime interface created" Jul 15 04:40:21.973842 containerd[2000]: time="2025-07-15T04:40:21.971888425Z" level=info msg="created NRI interface" Jul 15 04:40:21.973842 containerd[2000]: time="2025-07-15T04:40:21.971913217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 04:40:21.973842 containerd[2000]: time="2025-07-15T04:40:21.971953561Z" level=info msg="Connect containerd service" Jul 15 04:40:21.973842 containerd[2000]: time="2025-07-15T04:40:21.972025957Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 04:40:21.990455 containerd[2000]: time="2025-07-15T04:40:21.988087597Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:40:22.257761 polkitd[2118]: Started polkitd version 126 Jul 15 04:40:22.270567 amazon-ssm-agent[2150]: Initializing new seelog logger Jul 15 04:40:22.274847 amazon-ssm-agent[2150]: New Seelog Logger Creation Complete Jul 15 04:40:22.274847 amazon-ssm-agent[2150]: 2025/07/15 04:40:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:40:22.274847 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:40:22.275361 amazon-ssm-agent[2150]: 2025/07/15 04:40:22 processing appconfig overrides Jul 15 04:40:22.280406 amazon-ssm-agent[2150]: 2025/07/15 04:40:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 15 04:40:22.280406 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:40:22.280406 amazon-ssm-agent[2150]: 2025/07/15 04:40:22 processing appconfig overrides Jul 15 04:40:22.280406 amazon-ssm-agent[2150]: 2025/07/15 04:40:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:40:22.280406 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:40:22.280406 amazon-ssm-agent[2150]: 2025/07/15 04:40:22 processing appconfig overrides Jul 15 04:40:22.287885 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.2788 INFO Proxy environment variables: Jul 15 04:40:22.299357 amazon-ssm-agent[2150]: 2025/07/15 04:40:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:40:22.299357 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:40:22.299357 amazon-ssm-agent[2150]: 2025/07/15 04:40:22 processing appconfig overrides Jul 15 04:40:22.314821 polkitd[2118]: Loading rules from directory /etc/polkit-1/rules.d Jul 15 04:40:22.326376 polkitd[2118]: Loading rules from directory /run/polkit-1/rules.d Jul 15 04:40:22.326498 polkitd[2118]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 04:40:22.336914 polkitd[2118]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 15 04:40:22.337024 polkitd[2118]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 04:40:22.337117 polkitd[2118]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 15 04:40:22.349116 polkitd[2118]: Finished loading, compiling and executing 2 rules Jul 15 04:40:22.351037 systemd[1]: Started polkit.service - Authorization Manager. 
Jul 15 04:40:22.363808 dbus-daemon[1963]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 15 04:40:22.367491 containerd[2000]: time="2025-07-15T04:40:22.366790295Z" level=info msg="Start subscribing containerd event" Jul 15 04:40:22.369267 containerd[2000]: time="2025-07-15T04:40:22.368383487Z" level=info msg="Start recovering state" Jul 15 04:40:22.369267 containerd[2000]: time="2025-07-15T04:40:22.368771543Z" level=info msg="Start event monitor" Jul 15 04:40:22.369267 containerd[2000]: time="2025-07-15T04:40:22.368843411Z" level=info msg="Start cni network conf syncer for default" Jul 15 04:40:22.369267 containerd[2000]: time="2025-07-15T04:40:22.368865167Z" level=info msg="Start streaming server" Jul 15 04:40:22.369267 containerd[2000]: time="2025-07-15T04:40:22.368918639Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 04:40:22.369267 containerd[2000]: time="2025-07-15T04:40:22.368943707Z" level=info msg="runtime interface starting up..." Jul 15 04:40:22.369267 containerd[2000]: time="2025-07-15T04:40:22.368963015Z" level=info msg="starting plugins..." Jul 15 04:40:22.369815 containerd[2000]: time="2025-07-15T04:40:22.369708179Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 04:40:22.370741 polkitd[2118]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 15 04:40:22.374527 containerd[2000]: time="2025-07-15T04:40:22.374128931Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 04:40:22.375668 containerd[2000]: time="2025-07-15T04:40:22.375291647Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 04:40:22.376827 containerd[2000]: time="2025-07-15T04:40:22.376762787Z" level=info msg="containerd successfully booted in 0.628285s" Jul 15 04:40:22.376915 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 15 04:40:22.393380 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.2789 INFO http_proxy: Jul 15 04:40:22.453843 systemd-resolved[1832]: System hostname changed to 'ip-172-31-20-207'. Jul 15 04:40:22.453980 systemd-hostnamed[2024]: Hostname set to (transient) Jul 15 04:40:22.495650 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.2789 INFO no_proxy: Jul 15 04:40:22.594880 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.2789 INFO https_proxy: Jul 15 04:40:22.692769 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.2791 INFO Checking if agent identity type OnPrem can be assumed Jul 15 04:40:22.791465 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.2792 INFO Checking if agent identity type EC2 can be assumed Jul 15 04:40:22.870162 sshd_keygen[2009]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 04:40:22.891709 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.5204 INFO Agent will take identity from EC2 Jul 15 04:40:22.929762 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 04:40:22.941174 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 04:40:22.949272 systemd[1]: Started sshd@0-172.31.20.207:22-139.178.89.65:59666.service - OpenSSH per-connection server daemon (139.178.89.65:59666). Jul 15 04:40:22.994902 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.5242 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 15 04:40:22.998973 tar[1984]: linux-arm64/README.md Jul 15 04:40:23.012688 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 04:40:23.014842 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 04:40:23.031232 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 04:40:23.052334 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 04:40:23.094096 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.5242 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 15 04:40:23.094363 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jul 15 04:40:23.107427 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 04:40:23.117669 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 04:40:23.122586 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 04:40:23.193826 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.5242 INFO [amazon-ssm-agent] Starting Core Agent Jul 15 04:40:23.254215 sshd[2216]: Accepted publickey for core from 139.178.89.65 port 59666 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:40:23.261816 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:40:23.293950 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.5242 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jul 15 04:40:23.293782 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 04:40:23.303847 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 04:40:23.317772 systemd-logind[1973]: New session 1 of user core. Jul 15 04:40:23.346430 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 04:40:23.358414 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 04:40:23.393599 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.5242 INFO [Registrar] Starting registrar module Jul 15 04:40:23.396225 (systemd)[2231]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 04:40:23.404999 systemd-logind[1973]: New session c1 of user core. Jul 15 04:40:23.495128 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.5274 INFO [EC2Identity] Checking disk for registration info Jul 15 04:40:23.549707 amazon-ssm-agent[2150]: 2025/07/15 04:40:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 04:40:23.551738 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 15 04:40:23.552193 amazon-ssm-agent[2150]: 2025/07/15 04:40:23 processing appconfig overrides Jul 15 04:40:23.590647 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.5274 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 15 04:40:23.590647 amazon-ssm-agent[2150]: 2025-07-15 04:40:22.5274 INFO [EC2Identity] Generating registration keypair Jul 15 04:40:23.590647 amazon-ssm-agent[2150]: 2025-07-15 04:40:23.4939 INFO [EC2Identity] Checking write access before registering Jul 15 04:40:23.590647 amazon-ssm-agent[2150]: 2025-07-15 04:40:23.4948 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 15 04:40:23.590647 amazon-ssm-agent[2150]: 2025-07-15 04:40:23.5483 INFO [EC2Identity] EC2 registration was successful. Jul 15 04:40:23.590647 amazon-ssm-agent[2150]: 2025-07-15 04:40:23.5484 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 15 04:40:23.590647 amazon-ssm-agent[2150]: 2025-07-15 04:40:23.5485 INFO [CredentialRefresher] credentialRefresher has started Jul 15 04:40:23.590647 amazon-ssm-agent[2150]: 2025-07-15 04:40:23.5485 INFO [CredentialRefresher] Starting credentials refresher loop Jul 15 04:40:23.590647 amazon-ssm-agent[2150]: 2025-07-15 04:40:23.5899 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 15 04:40:23.590647 amazon-ssm-agent[2150]: 2025-07-15 04:40:23.5902 INFO [CredentialRefresher] Credentials ready Jul 15 04:40:23.593905 amazon-ssm-agent[2150]: 2025-07-15 04:40:23.5904 INFO [CredentialRefresher] Next credential rotation will be in 29.9999909638 minutes Jul 15 04:40:23.763313 systemd[2231]: Queued start job for default target default.target. Jul 15 04:40:23.771109 systemd[2231]: Created slice app.slice - User Application Slice. Jul 15 04:40:23.771186 systemd[2231]: Reached target paths.target - Paths. Jul 15 04:40:23.771285 systemd[2231]: Reached target timers.target - Timers. 
Jul 15 04:40:23.774011 systemd[2231]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 04:40:23.803539 systemd[2231]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 04:40:23.804056 systemd[2231]: Reached target sockets.target - Sockets. Jul 15 04:40:23.804406 systemd[2231]: Reached target basic.target - Basic System. Jul 15 04:40:23.804747 systemd[2231]: Reached target default.target - Main User Target. Jul 15 04:40:23.804767 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 04:40:23.804825 systemd[2231]: Startup finished in 384ms. Jul 15 04:40:23.821011 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 04:40:23.906364 ntpd[1968]: Listen normally on 6 eth0 [fe80::44b:18ff:feec:303%2]:123 Jul 15 04:40:23.907751 ntpd[1968]: 15 Jul 04:40:23 ntpd[1968]: Listen normally on 6 eth0 [fe80::44b:18ff:feec:303%2]:123 Jul 15 04:40:23.989793 systemd[1]: Started sshd@1-172.31.20.207:22-139.178.89.65:59682.service - OpenSSH per-connection server daemon (139.178.89.65:59682). Jul 15 04:40:24.184805 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 59682 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:40:24.188355 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:40:24.201888 systemd-logind[1973]: New session 2 of user core. Jul 15 04:40:24.211082 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 04:40:24.347855 sshd[2245]: Connection closed by 139.178.89.65 port 59682 Jul 15 04:40:24.349605 sshd-session[2242]: pam_unix(sshd:session): session closed for user core Jul 15 04:40:24.358174 systemd[1]: sshd@1-172.31.20.207:22-139.178.89.65:59682.service: Deactivated successfully. Jul 15 04:40:24.362463 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 04:40:24.367868 systemd-logind[1973]: Session 2 logged out. Waiting for processes to exit. 
Jul 15 04:40:24.388383 systemd[1]: Started sshd@2-172.31.20.207:22-139.178.89.65:59698.service - OpenSSH per-connection server daemon (139.178.89.65:59698). Jul 15 04:40:24.399480 systemd-logind[1973]: Removed session 2. Jul 15 04:40:24.585232 sshd[2251]: Accepted publickey for core from 139.178.89.65 port 59698 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:40:24.588132 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:40:24.600769 systemd-logind[1973]: New session 3 of user core. Jul 15 04:40:24.601981 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 04:40:24.629638 amazon-ssm-agent[2150]: 2025-07-15 04:40:24.6292 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 15 04:40:24.730928 amazon-ssm-agent[2150]: 2025-07-15 04:40:24.6329 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2257) started Jul 15 04:40:24.739128 sshd[2256]: Connection closed by 139.178.89.65 port 59698 Jul 15 04:40:24.739967 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Jul 15 04:40:24.750468 systemd[1]: sshd@2-172.31.20.207:22-139.178.89.65:59698.service: Deactivated successfully. Jul 15 04:40:24.756778 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 04:40:24.759818 systemd-logind[1973]: Session 3 logged out. Waiting for processes to exit. Jul 15 04:40:24.763401 systemd-logind[1973]: Removed session 3. Jul 15 04:40:24.831223 amazon-ssm-agent[2150]: 2025-07-15 04:40:24.6329 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 15 04:40:24.907058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:40:24.913392 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 15 04:40:24.917575 systemd[1]: Startup finished in 3.694s (kernel) + 9.057s (initrd) + 10.246s (userspace) = 22.997s. Jul 15 04:40:24.921416 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:40:26.274637 kubelet[2278]: E0715 04:40:26.274544 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:40:26.280541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:40:26.281229 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:40:26.282114 systemd[1]: kubelet.service: Consumed 1.524s CPU time, 259.2M memory peak. Jul 15 04:40:27.589955 systemd-resolved[1832]: Clock change detected. Flushing caches. Jul 15 04:40:34.464558 systemd[1]: Started sshd@3-172.31.20.207:22-139.178.89.65:57012.service - OpenSSH per-connection server daemon (139.178.89.65:57012). Jul 15 04:40:34.667062 sshd[2290]: Accepted publickey for core from 139.178.89.65 port 57012 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:40:34.669817 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:40:34.679210 systemd-logind[1973]: New session 4 of user core. Jul 15 04:40:34.689473 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 04:40:34.815451 sshd[2293]: Connection closed by 139.178.89.65 port 57012 Jul 15 04:40:34.817248 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Jul 15 04:40:34.826202 systemd[1]: sshd@3-172.31.20.207:22-139.178.89.65:57012.service: Deactivated successfully. 
Jul 15 04:40:34.830695 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 04:40:34.833183 systemd-logind[1973]: Session 4 logged out. Waiting for processes to exit. Jul 15 04:40:34.836369 systemd-logind[1973]: Removed session 4. Jul 15 04:40:34.855968 systemd[1]: Started sshd@4-172.31.20.207:22-139.178.89.65:57014.service - OpenSSH per-connection server daemon (139.178.89.65:57014). Jul 15 04:40:35.052216 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 57014 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:40:35.055475 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:40:35.065214 systemd-logind[1973]: New session 5 of user core. Jul 15 04:40:35.072488 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 04:40:35.193535 sshd[2302]: Connection closed by 139.178.89.65 port 57014 Jul 15 04:40:35.193392 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Jul 15 04:40:35.200954 systemd[1]: sshd@4-172.31.20.207:22-139.178.89.65:57014.service: Deactivated successfully. Jul 15 04:40:35.204360 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 04:40:35.206380 systemd-logind[1973]: Session 5 logged out. Waiting for processes to exit. Jul 15 04:40:35.210013 systemd-logind[1973]: Removed session 5. Jul 15 04:40:35.234952 systemd[1]: Started sshd@5-172.31.20.207:22-139.178.89.65:57022.service - OpenSSH per-connection server daemon (139.178.89.65:57022). Jul 15 04:40:35.432169 sshd[2308]: Accepted publickey for core from 139.178.89.65 port 57022 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:40:35.435979 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:40:35.449279 systemd-logind[1973]: New session 6 of user core. Jul 15 04:40:35.456432 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 15 04:40:35.586452 sshd[2311]: Connection closed by 139.178.89.65 port 57022 Jul 15 04:40:35.588084 sshd-session[2308]: pam_unix(sshd:session): session closed for user core Jul 15 04:40:35.595880 systemd-logind[1973]: Session 6 logged out. Waiting for processes to exit. Jul 15 04:40:35.596832 systemd[1]: sshd@5-172.31.20.207:22-139.178.89.65:57022.service: Deactivated successfully. Jul 15 04:40:35.601745 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 04:40:35.608304 systemd-logind[1973]: Removed session 6. Jul 15 04:40:35.627586 systemd[1]: Started sshd@6-172.31.20.207:22-139.178.89.65:57028.service - OpenSSH per-connection server daemon (139.178.89.65:57028). Jul 15 04:40:35.830802 sshd[2317]: Accepted publickey for core from 139.178.89.65 port 57028 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:40:35.833019 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:40:35.842871 systemd-logind[1973]: New session 7 of user core. Jul 15 04:40:35.853494 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 04:40:35.985638 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 04:40:35.988366 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:40:35.990972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 04:40:35.997475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:40:36.010907 sudo[2321]: pam_unix(sudo:session): session closed for user root Jul 15 04:40:36.037170 sshd[2320]: Connection closed by 139.178.89.65 port 57028 Jul 15 04:40:36.038087 sshd-session[2317]: pam_unix(sshd:session): session closed for user core Jul 15 04:40:36.050034 systemd[1]: sshd@6-172.31.20.207:22-139.178.89.65:57028.service: Deactivated successfully. Jul 15 04:40:36.058724 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 15 04:40:36.061735 systemd-logind[1973]: Session 7 logged out. Waiting for processes to exit. Jul 15 04:40:36.088625 systemd[1]: Started sshd@7-172.31.20.207:22-139.178.89.65:57032.service - OpenSSH per-connection server daemon (139.178.89.65:57032). Jul 15 04:40:36.098537 systemd-logind[1973]: Removed session 7. Jul 15 04:40:36.303051 sshd[2330]: Accepted publickey for core from 139.178.89.65 port 57032 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:40:36.306073 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:40:36.315785 systemd-logind[1973]: New session 8 of user core. Jul 15 04:40:36.324458 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 04:40:36.402165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:40:36.417066 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:40:36.433961 sudo[2341]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 04:40:36.434757 sudo[2341]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:40:36.444361 sudo[2341]: pam_unix(sudo:session): session closed for user root Jul 15 04:40:36.455899 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 04:40:36.456708 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:40:36.479476 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 15 04:40:36.512142 kubelet[2339]: E0715 04:40:36.510876 2339 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:40:36.524263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:40:36.524597 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:40:36.525708 systemd[1]: kubelet.service: Consumed 369ms CPU time, 105.2M memory peak. Jul 15 04:40:36.553694 augenrules[2369]: No rules Jul 15 04:40:36.556347 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:40:36.558271 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:40:36.560968 sudo[2340]: pam_unix(sudo:session): session closed for user root Jul 15 04:40:36.584168 sshd[2333]: Connection closed by 139.178.89.65 port 57032 Jul 15 04:40:36.585521 sshd-session[2330]: pam_unix(sshd:session): session closed for user core Jul 15 04:40:36.592370 systemd-logind[1973]: Session 8 logged out. Waiting for processes to exit. Jul 15 04:40:36.593896 systemd[1]: sshd@7-172.31.20.207:22-139.178.89.65:57032.service: Deactivated successfully. Jul 15 04:40:36.597001 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 04:40:36.601919 systemd-logind[1973]: Removed session 8. Jul 15 04:40:36.624392 systemd[1]: Started sshd@8-172.31.20.207:22-139.178.89.65:57040.service - OpenSSH per-connection server daemon (139.178.89.65:57040). 
Jul 15 04:40:36.818053 sshd[2378]: Accepted publickey for core from 139.178.89.65 port 57040 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:40:36.819951 sshd-session[2378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:40:36.829018 systemd-logind[1973]: New session 9 of user core. Jul 15 04:40:36.835467 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 04:40:36.941307 sudo[2382]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 04:40:36.941984 sudo[2382]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:40:37.535794 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 04:40:37.559764 (dockerd)[2400]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 04:40:37.983639 dockerd[2400]: time="2025-07-15T04:40:37.982394707Z" level=info msg="Starting up" Jul 15 04:40:37.984650 dockerd[2400]: time="2025-07-15T04:40:37.984597163Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 04:40:38.005348 dockerd[2400]: time="2025-07-15T04:40:38.005224083Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 04:40:38.039968 systemd[1]: var-lib-docker-metacopy\x2dcheck593055594-merged.mount: Deactivated successfully. Jul 15 04:40:38.060743 dockerd[2400]: time="2025-07-15T04:40:38.060344524Z" level=info msg="Loading containers: start." Jul 15 04:40:38.079390 kernel: Initializing XFRM netlink socket Jul 15 04:40:38.448681 (udev-worker)[2421]: Network interface NamePolicy= disabled on kernel command line. Jul 15 04:40:38.529544 systemd-networkd[1831]: docker0: Link UP Jul 15 04:40:38.540513 dockerd[2400]: time="2025-07-15T04:40:38.540422310Z" level=info msg="Loading containers: done." 
Jul 15 04:40:38.574251 dockerd[2400]: time="2025-07-15T04:40:38.574174758Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 04:40:38.574500 dockerd[2400]: time="2025-07-15T04:40:38.574306278Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 04:40:38.574568 dockerd[2400]: time="2025-07-15T04:40:38.574493142Z" level=info msg="Initializing buildkit" Jul 15 04:40:38.634864 dockerd[2400]: time="2025-07-15T04:40:38.634798842Z" level=info msg="Completed buildkit initialization" Jul 15 04:40:38.651863 dockerd[2400]: time="2025-07-15T04:40:38.651668622Z" level=info msg="Daemon has completed initialization" Jul 15 04:40:38.652425 dockerd[2400]: time="2025-07-15T04:40:38.652154190Z" level=info msg="API listen on /run/docker.sock" Jul 15 04:40:38.652826 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 04:40:39.717637 containerd[2000]: time="2025-07-15T04:40:39.717520856Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 15 04:40:40.390812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241814973.mount: Deactivated successfully. 
Jul 15 04:40:41.811507 containerd[2000]: time="2025-07-15T04:40:41.810384826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:41.812876 containerd[2000]: time="2025-07-15T04:40:41.812421034Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jul 15 04:40:41.814882 containerd[2000]: time="2025-07-15T04:40:41.814813654Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:41.820427 containerd[2000]: time="2025-07-15T04:40:41.820356286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:41.822898 containerd[2000]: time="2025-07-15T04:40:41.822596998Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 2.104966882s" Jul 15 04:40:41.822898 containerd[2000]: time="2025-07-15T04:40:41.822670186Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 15 04:40:41.825971 containerd[2000]: time="2025-07-15T04:40:41.825822502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 15 04:40:43.325951 containerd[2000]: time="2025-07-15T04:40:43.325629766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:43.327457 containerd[2000]: time="2025-07-15T04:40:43.327379942Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jul 15 04:40:43.328331 containerd[2000]: time="2025-07-15T04:40:43.328213822Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:43.338809 containerd[2000]: time="2025-07-15T04:40:43.338720518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:43.340933 containerd[2000]: time="2025-07-15T04:40:43.340869898Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.514985872s" Jul 15 04:40:43.341354 containerd[2000]: time="2025-07-15T04:40:43.341145514Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 15 04:40:43.342291 containerd[2000]: time="2025-07-15T04:40:43.342193954Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 15 04:40:44.529538 containerd[2000]: time="2025-07-15T04:40:44.529447356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:44.531317 containerd[2000]: time="2025-07-15T04:40:44.531233808Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jul 15 04:40:44.532557 containerd[2000]: time="2025-07-15T04:40:44.532467720Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:44.537064 containerd[2000]: time="2025-07-15T04:40:44.537009804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:44.539455 containerd[2000]: time="2025-07-15T04:40:44.539237268Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.196983158s" Jul 15 04:40:44.539455 containerd[2000]: time="2025-07-15T04:40:44.539301648Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 15 04:40:44.540542 containerd[2000]: time="2025-07-15T04:40:44.540184428Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 15 04:40:45.860735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798986860.mount: Deactivated successfully. 
Jul 15 04:40:46.461044 containerd[2000]: time="2025-07-15T04:40:46.460938949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:46.463662 containerd[2000]: time="2025-07-15T04:40:46.463537129Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jul 15 04:40:46.466084 containerd[2000]: time="2025-07-15T04:40:46.465999493Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:46.470453 containerd[2000]: time="2025-07-15T04:40:46.470366893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:46.471824 containerd[2000]: time="2025-07-15T04:40:46.471595921Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.931348589s" Jul 15 04:40:46.471824 containerd[2000]: time="2025-07-15T04:40:46.471654433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 15 04:40:46.472224 containerd[2000]: time="2025-07-15T04:40:46.472173709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 15 04:40:46.775164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 04:40:46.778363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 04:40:47.074229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616040671.mount: Deactivated successfully. Jul 15 04:40:47.209093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:40:47.231028 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:40:47.340207 kubelet[2702]: E0715 04:40:47.339527 2702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:40:47.347357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:40:47.348086 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:40:47.351945 systemd[1]: kubelet.service: Consumed 350ms CPU time, 105.5M memory peak. 
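The kubelet exit above (status=1) is the expected pre-bootstrap crash loop: `/var/lib/kubelet/config.yaml` does not exist yet, and on a kubeadm-managed node that file is only written by `kubeadm init` or `kubeadm join`. A small sketch of the same existence check, with the path parameterized so it can be exercised anywhere (the default is the path from the error message):

```python
from pathlib import Path

def kubelet_config_ready(path: str = "/var/lib/kubelet/config.yaml") -> bool:
    """True when the kubelet config file exists; mirrors the check that is
    failing in the log entries above."""
    return Path(path).is_file()

if not kubelet_config_ready():
    print("kubelet config missing: node not yet initialized/joined")
```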
Jul 15 04:40:48.427406 containerd[2000]: time="2025-07-15T04:40:48.426488019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:48.432317 containerd[2000]: time="2025-07-15T04:40:48.432250419Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jul 15 04:40:48.439493 containerd[2000]: time="2025-07-15T04:40:48.438376287Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:48.448240 containerd[2000]: time="2025-07-15T04:40:48.448158975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:48.451720 containerd[2000]: time="2025-07-15T04:40:48.451613427Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.979380462s" Jul 15 04:40:48.451968 containerd[2000]: time="2025-07-15T04:40:48.451929111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 15 04:40:48.454799 containerd[2000]: time="2025-07-15T04:40:48.453867303Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 04:40:48.969892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984117921.mount: Deactivated successfully. 
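The transient mount units above (e.g. `var-lib-containerd-tmpmounts-containerd\x2dmount2984117921.mount`) are systemd's escaped form of filesystem paths. A minimal sketch of that escaping, covering only the two substitutions visible in this log — systemd.unit(5) defines the full rules, which escape more characters as `\xXX`:

```python
def mount_unit_name(path: str) -> str:
    """Escape a mount point path into a systemd mount unit name: drop the
    leading '/', turn '/' into '-', and escape literal '-' as '\\x2d'.
    (Partial sketch: real systemd also escapes other bytes as \\xXX.)"""
    out = []
    for ch in path.lstrip("/"):
        if ch == "/":
            out.append("-")
        elif ch == "-":
            out.append("\\x2d")
        else:
            out.append(ch)
    return "".join(out) + ".mount"

print(mount_unit_name("/var/lib/containerd/tmpmounts/containerd-mount2984117921"))
# prints: var-lib-containerd-tmpmounts-containerd\x2dmount2984117921.mount
```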
Jul 15 04:40:48.983474 containerd[2000]: time="2025-07-15T04:40:48.983382030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:40:48.986304 containerd[2000]: time="2025-07-15T04:40:48.986235594Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 15 04:40:48.988799 containerd[2000]: time="2025-07-15T04:40:48.988728438Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:40:48.994681 containerd[2000]: time="2025-07-15T04:40:48.994568190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:40:48.996173 containerd[2000]: time="2025-07-15T04:40:48.995954934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 541.983771ms" Jul 15 04:40:48.996173 containerd[2000]: time="2025-07-15T04:40:48.996007218Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 04:40:48.996745 containerd[2000]: time="2025-07-15T04:40:48.996586002Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 15 04:40:49.606986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1963815321.mount: 
Deactivated successfully. Jul 15 04:40:51.760495 containerd[2000]: time="2025-07-15T04:40:51.760358576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:51.764024 containerd[2000]: time="2025-07-15T04:40:51.763931012Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jul 15 04:40:51.766603 containerd[2000]: time="2025-07-15T04:40:51.766519544Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:51.775321 containerd[2000]: time="2025-07-15T04:40:51.775219976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:40:51.777789 containerd[2000]: time="2025-07-15T04:40:51.777574184Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.78093903s" Jul 15 04:40:51.777789 containerd[2000]: time="2025-07-15T04:40:51.777639992Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 15 04:40:52.170754 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 15 04:40:57.551229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 15 04:40:57.556415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:40:57.899346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
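The containerd "Pulled image" entries above report each image's size in bytes and the wall-clock pull duration, which together give an effective download rate. A sketch extracting both from one such entry — the regex is mine, written against the message format shown above, and the sample line abbreviates the entry with quotes unescaped for readability:

```python
import re

# One "Pulled image" message from the log above (reduced to the fields the
# regex needs; quotes unescaped).
line = ('Pulled image "registry.k8s.io/etcd:3.5.21-0" with image id '
        '"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5", '
        'size "70026017" in 2.78093903s')

m = re.search(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)s', line)
if m:
    image = m.group(1)
    size_bytes = int(m.group(2))      # size reported in bytes
    seconds = float(m.group(3))       # pull duration in seconds
    rate = size_bytes / (1024 * 1024) / seconds
    print(f"{image}: {rate:.1f} MiB/s")  # prints: registry.k8s.io/etcd:3.5.21-0: 24.0 MiB/s
```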
Jul 15 04:40:57.907872 (kubelet)[2841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:40:57.981873 kubelet[2841]: E0715 04:40:57.981663 2841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:40:57.987625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:40:57.988084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:40:57.988954 systemd[1]: kubelet.service: Consumed 285ms CPU time, 104.9M memory peak. Jul 15 04:41:00.242353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:41:00.243360 systemd[1]: kubelet.service: Consumed 285ms CPU time, 104.9M memory peak. Jul 15 04:41:00.247273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:41:00.294055 systemd[1]: Reload requested from client PID 2855 ('systemctl') (unit session-9.scope)... Jul 15 04:41:00.294088 systemd[1]: Reloading... Jul 15 04:41:00.531144 zram_generator::config[2905]: No configuration found. Jul 15 04:41:00.734005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:41:00.999905 systemd[1]: Reloading finished in 704 ms. Jul 15 04:41:01.105898 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 04:41:01.106121 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 04:41:01.106784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 04:41:01.106885 systemd[1]: kubelet.service: Consumed 221ms CPU time, 95M memory peak. Jul 15 04:41:01.110420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:41:01.449500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:41:01.466008 (kubelet)[2963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:41:01.545526 kubelet[2963]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:41:01.545526 kubelet[2963]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 04:41:01.545526 kubelet[2963]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 04:41:01.546353 kubelet[2963]: I0715 04:41:01.545611 2963 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:41:02.445727 kubelet[2963]: I0715 04:41:02.445654 2963 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 04:41:02.445727 kubelet[2963]: I0715 04:41:02.445704 2963 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:41:02.446196 kubelet[2963]: I0715 04:41:02.446162 2963 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 04:41:02.500365 kubelet[2963]: E0715 04:41:02.500298 2963 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.207:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.207:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 04:41:02.502316 kubelet[2963]: I0715 04:41:02.502044 2963 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:41:02.519325 kubelet[2963]: I0715 04:41:02.519293 2963 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:41:02.525838 kubelet[2963]: I0715 04:41:02.525435 2963 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:41:02.526062 kubelet[2963]: I0715 04:41:02.526015 2963 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:41:02.526374 kubelet[2963]: I0715 04:41:02.526063 2963 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-207","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:41:02.526548 kubelet[2963]: I0715 04:41:02.526514 2963 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 
04:41:02.526548 kubelet[2963]: I0715 04:41:02.526536 2963 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 04:41:02.528363 kubelet[2963]: I0715 04:41:02.528314 2963 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:41:02.534613 kubelet[2963]: I0715 04:41:02.534549 2963 kubelet.go:480] "Attempting to sync node with API server" Jul 15 04:41:02.534613 kubelet[2963]: I0715 04:41:02.534600 2963 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:41:02.537423 kubelet[2963]: I0715 04:41:02.537366 2963 kubelet.go:386] "Adding apiserver pod source" Jul 15 04:41:02.540093 kubelet[2963]: I0715 04:41:02.540051 2963 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:41:02.542709 kubelet[2963]: E0715 04:41:02.542666 2963 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-207&limit=500&resourceVersion=0\": dial tcp 172.31.20.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 04:41:02.546088 kubelet[2963]: E0715 04:41:02.546033 2963 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.207:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 04:41:02.546889 kubelet[2963]: I0715 04:41:02.546858 2963 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:41:02.548262 kubelet[2963]: I0715 04:41:02.548230 2963 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 
04:41:02.548598 kubelet[2963]: W0715 04:41:02.548576 2963 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 04:41:02.560943 kubelet[2963]: I0715 04:41:02.560882 2963 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 04:41:02.561079 kubelet[2963]: I0715 04:41:02.560965 2963 server.go:1289] "Started kubelet" Jul 15 04:41:02.575146 kubelet[2963]: I0715 04:41:02.574487 2963 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:41:02.575348 kubelet[2963]: E0715 04:41:02.572230 2963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.207:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.207:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-207.185252ff8e7814dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-207,UID:ip-172-31-20-207,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-207,},FirstTimestamp:2025-07-15 04:41:02.560916701 +0000 UTC m=+1.086999198,LastTimestamp:2025-07-15 04:41:02.560916701 +0000 UTC m=+1.086999198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-207,}" Jul 15 04:41:02.575671 kubelet[2963]: I0715 04:41:02.575612 2963 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:41:02.576023 kubelet[2963]: I0715 04:41:02.575979 2963 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:41:02.578377 kubelet[2963]: I0715 04:41:02.578343 2963 server.go:317] "Adding debug handlers to kubelet server" Jul 15 04:41:02.584501 kubelet[2963]: I0715 04:41:02.584415 2963 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:41:02.584978 kubelet[2963]: I0715 04:41:02.584950 2963 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:41:02.586379 kubelet[2963]: I0715 04:41:02.586232 2963 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 04:41:02.589396 kubelet[2963]: E0715 04:41:02.588174 2963 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-207\" not found" Jul 15 04:41:02.590817 kubelet[2963]: I0715 04:41:02.590766 2963 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 04:41:02.591025 kubelet[2963]: I0715 04:41:02.590989 2963 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 04:41:02.591132 kubelet[2963]: I0715 04:41:02.591082 2963 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:41:02.592019 kubelet[2963]: I0715 04:41:02.591968 2963 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:41:02.595752 kubelet[2963]: E0715 04:41:02.595695 2963 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 04:41:02.596742 kubelet[2963]: I0715 04:41:02.596698 2963 factory.go:223] Registration of the containerd container factory successfully Jul 15 04:41:02.597063 kubelet[2963]: I0715 04:41:02.596990 2963 factory.go:223] Registration of the systemd container factory successfully Jul 15 04:41:02.597468 kubelet[2963]: E0715 04:41:02.597389 2963 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-207?timeout=10s\": dial tcp 172.31.20.207:6443: connect: connection refused" interval="200ms" Jul 15 04:41:02.632177 kubelet[2963]: I0715 04:41:02.630736 2963 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 04:41:02.632177 kubelet[2963]: I0715 04:41:02.630772 2963 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 04:41:02.632177 kubelet[2963]: I0715 04:41:02.630801 2963 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:41:02.634562 kubelet[2963]: I0715 04:41:02.634464 2963 policy_none.go:49] "None policy: Start" Jul 15 04:41:02.634678 kubelet[2963]: I0715 04:41:02.634562 2963 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 04:41:02.634678 kubelet[2963]: I0715 04:41:02.634612 2963 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:41:02.637391 kubelet[2963]: I0715 04:41:02.637322 2963 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 04:41:02.637543 kubelet[2963]: I0715 04:41:02.637418 2963 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 04:41:02.638154 kubelet[2963]: I0715 04:41:02.637457 2963 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 04:41:02.638154 kubelet[2963]: I0715 04:41:02.637820 2963 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 04:41:02.638154 kubelet[2963]: E0715 04:41:02.637915 2963 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:41:02.640534 kubelet[2963]: E0715 04:41:02.640476 2963 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 04:41:02.650093 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 04:41:02.668770 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 04:41:02.675718 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 15 04:41:02.688309 kubelet[2963]: E0715 04:41:02.688251 2963 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-207\" not found" Jul 15 04:41:02.688727 kubelet[2963]: E0715 04:41:02.688405 2963 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 04:41:02.689423 kubelet[2963]: I0715 04:41:02.689369 2963 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:41:02.689671 kubelet[2963]: I0715 04:41:02.689403 2963 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:41:02.691925 kubelet[2963]: I0715 04:41:02.690176 2963 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:41:02.693524 kubelet[2963]: E0715 04:41:02.693477 2963 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 04:41:02.693781 kubelet[2963]: E0715 04:41:02.693740 2963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-207\" not found" Jul 15 04:41:02.759911 systemd[1]: Created slice kubepods-burstable-pod5455a6fec75aeb600595c7e51b6d76e1.slice - libcontainer container kubepods-burstable-pod5455a6fec75aeb600595c7e51b6d76e1.slice. Jul 15 04:41:02.773860 kubelet[2963]: E0715 04:41:02.773799 2963 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:02.782192 systemd[1]: Created slice kubepods-burstable-podbdcada75f8537484f13e588f1b9210b7.slice - libcontainer container kubepods-burstable-podbdcada75f8537484f13e588f1b9210b7.slice. 
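The `kubepods-burstable-pod<hex>.slice` units created above embed each pod's UID (for static pods, a hash derived from the manifest) in a cgroup slice name nested under the QoS-class slice. A sketch of recovering the UID from such a name — the regex is mine and simplified; note that for API-server pods the systemd cgroup driver writes the UID's dashes as underscores, which this pattern also accepts:

```python
import re

def pod_uid_from_slice(slice_name: str):
    """Return the pod UID embedded in a kubepods systemd slice name, or None
    when the name is not a per-pod slice (e.g. the parent kubepods.slice)."""
    m = re.fullmatch(
        r"kubepods(?:-(?:burstable|besteffort))?-pod([0-9a-f_]+)\.slice",
        slice_name,
    )
    return m.group(1) if m else None

print(pod_uid_from_slice("kubepods-burstable-pod5455a6fec75aeb600595c7e51b6d76e1.slice"))
# prints: 5455a6fec75aeb600595c7e51b6d76e1
```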
Jul 15 04:41:02.788605 kubelet[2963]: E0715 04:41:02.788552 2963 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:02.792886 systemd[1]: Created slice kubepods-burstable-pod89863e38e91025150790b0bd74c00fbc.slice - libcontainer container kubepods-burstable-pod89863e38e91025150790b0bd74c00fbc.slice. Jul 15 04:41:02.803989 kubelet[2963]: E0715 04:41:02.802715 2963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-207?timeout=10s\": dial tcp 172.31.20.207:6443: connect: connection refused" interval="400ms" Jul 15 04:41:02.803989 kubelet[2963]: I0715 04:41:02.802964 2963 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-207" Jul 15 04:41:02.805663 kubelet[2963]: E0715 04:41:02.805595 2963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.207:6443/api/v1/nodes\": dial tcp 172.31.20.207:6443: connect: connection refused" node="ip-172-31-20-207" Jul 15 04:41:02.806591 kubelet[2963]: E0715 04:41:02.806523 2963 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:02.896590 kubelet[2963]: I0715 04:41:02.896465 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89863e38e91025150790b0bd74c00fbc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-207\" (UID: \"89863e38e91025150790b0bd74c00fbc\") " pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:02.896711 kubelet[2963]: I0715 04:41:02.896610 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5455a6fec75aeb600595c7e51b6d76e1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-207\" (UID: \"5455a6fec75aeb600595c7e51b6d76e1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:02.896711 kubelet[2963]: I0715 04:41:02.896695 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5455a6fec75aeb600595c7e51b6d76e1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-207\" (UID: \"5455a6fec75aeb600595c7e51b6d76e1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:02.896848 kubelet[2963]: I0715 04:41:02.896782 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bdcada75f8537484f13e588f1b9210b7-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-207\" (UID: \"bdcada75f8537484f13e588f1b9210b7\") " pod="kube-system/kube-scheduler-ip-172-31-20-207" Jul 15 04:41:02.896899 kubelet[2963]: I0715 04:41:02.896866 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89863e38e91025150790b0bd74c00fbc-ca-certs\") pod \"kube-apiserver-ip-172-31-20-207\" (UID: \"89863e38e91025150790b0bd74c00fbc\") " pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:02.896989 kubelet[2963]: I0715 04:41:02.896907 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5455a6fec75aeb600595c7e51b6d76e1-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-207\" (UID: \"5455a6fec75aeb600595c7e51b6d76e1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:02.897079 kubelet[2963]: I0715 04:41:02.896999 2963 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5455a6fec75aeb600595c7e51b6d76e1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-207\" (UID: \"5455a6fec75aeb600595c7e51b6d76e1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:02.897188 kubelet[2963]: I0715 04:41:02.897155 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5455a6fec75aeb600595c7e51b6d76e1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-207\" (UID: \"5455a6fec75aeb600595c7e51b6d76e1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:02.897296 kubelet[2963]: I0715 04:41:02.897255 2963 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89863e38e91025150790b0bd74c00fbc-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-207\" (UID: \"89863e38e91025150790b0bd74c00fbc\") " pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:03.008565 kubelet[2963]: I0715 04:41:03.008525 2963 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-207" Jul 15 04:41:03.009882 kubelet[2963]: E0715 04:41:03.009781 2963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.207:6443/api/v1/nodes\": dial tcp 172.31.20.207:6443: connect: connection refused" node="ip-172-31-20-207" Jul 15 04:41:03.076375 containerd[2000]: time="2025-07-15T04:41:03.076221508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-207,Uid:5455a6fec75aeb600595c7e51b6d76e1,Namespace:kube-system,Attempt:0,}" Jul 15 04:41:03.090629 containerd[2000]: time="2025-07-15T04:41:03.090269440Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-207,Uid:bdcada75f8537484f13e588f1b9210b7,Namespace:kube-system,Attempt:0,}" Jul 15 04:41:03.109626 containerd[2000]: time="2025-07-15T04:41:03.109562680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-207,Uid:89863e38e91025150790b0bd74c00fbc,Namespace:kube-system,Attempt:0,}" Jul 15 04:41:03.112353 containerd[2000]: time="2025-07-15T04:41:03.112205584Z" level=info msg="connecting to shim 3bbe6ebef381e18ee6751770431d3250ad5ef37df8370caa622691def0fa202d" address="unix:///run/containerd/s/2b884b488971e83ebde03d3d3147a7d474367ffcd48baf1431367ff1335400ae" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:03.177648 containerd[2000]: time="2025-07-15T04:41:03.177560884Z" level=info msg="connecting to shim 9c656914335b05cfe10ec4da902c21aff970af597094b380c3c167c93108060f" address="unix:///run/containerd/s/52edcbbec7fee63f7f4070c7c41b2828cd6dca62da7791e359a2ba712ffe9405" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:03.186669 containerd[2000]: time="2025-07-15T04:41:03.186611392Z" level=info msg="connecting to shim 3bdbda99b4784b633afb8509840f6d50c38f2040e8dd882a42354aa89e484fe3" address="unix:///run/containerd/s/565aa3391c21d5bc163142aa0231700399be76763343dd693b65e4fc19dfb999" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:03.192433 systemd[1]: Started cri-containerd-3bbe6ebef381e18ee6751770431d3250ad5ef37df8370caa622691def0fa202d.scope - libcontainer container 3bbe6ebef381e18ee6751770431d3250ad5ef37df8370caa622691def0fa202d. 
Jul 15 04:41:03.205341 kubelet[2963]: E0715 04:41:03.205280 2963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-207?timeout=10s\": dial tcp 172.31.20.207:6443: connect: connection refused" interval="800ms" Jul 15 04:41:03.271611 systemd[1]: Started cri-containerd-9c656914335b05cfe10ec4da902c21aff970af597094b380c3c167c93108060f.scope - libcontainer container 9c656914335b05cfe10ec4da902c21aff970af597094b380c3c167c93108060f. Jul 15 04:41:03.289143 systemd[1]: Started cri-containerd-3bdbda99b4784b633afb8509840f6d50c38f2040e8dd882a42354aa89e484fe3.scope - libcontainer container 3bdbda99b4784b633afb8509840f6d50c38f2040e8dd882a42354aa89e484fe3. Jul 15 04:41:03.319225 containerd[2000]: time="2025-07-15T04:41:03.319157909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-207,Uid:5455a6fec75aeb600595c7e51b6d76e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bbe6ebef381e18ee6751770431d3250ad5ef37df8370caa622691def0fa202d\"" Jul 15 04:41:03.333778 containerd[2000]: time="2025-07-15T04:41:03.333372257Z" level=info msg="CreateContainer within sandbox \"3bbe6ebef381e18ee6751770431d3250ad5ef37df8370caa622691def0fa202d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 04:41:03.352317 containerd[2000]: time="2025-07-15T04:41:03.352199273Z" level=info msg="Container e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:03.364765 containerd[2000]: time="2025-07-15T04:41:03.364550297Z" level=info msg="CreateContainer within sandbox \"3bbe6ebef381e18ee6751770431d3250ad5ef37df8370caa622691def0fa202d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b\"" Jul 15 04:41:03.366030 containerd[2000]: 
time="2025-07-15T04:41:03.365978045Z" level=info msg="StartContainer for \"e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b\"" Jul 15 04:41:03.382719 containerd[2000]: time="2025-07-15T04:41:03.382430957Z" level=info msg="connecting to shim e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b" address="unix:///run/containerd/s/2b884b488971e83ebde03d3d3147a7d474367ffcd48baf1431367ff1335400ae" protocol=ttrpc version=3 Jul 15 04:41:03.411606 kubelet[2963]: E0715 04:41:03.410724 2963 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-207&limit=500&resourceVersion=0\": dial tcp 172.31.20.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 04:41:03.425395 kubelet[2963]: I0715 04:41:03.424882 2963 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-207" Jul 15 04:41:03.427142 kubelet[2963]: E0715 04:41:03.427041 2963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.207:6443/api/v1/nodes\": dial tcp 172.31.20.207:6443: connect: connection refused" node="ip-172-31-20-207" Jul 15 04:41:03.431312 containerd[2000]: time="2025-07-15T04:41:03.431261958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-207,Uid:89863e38e91025150790b0bd74c00fbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bdbda99b4784b633afb8509840f6d50c38f2040e8dd882a42354aa89e484fe3\"" Jul 15 04:41:03.442407 containerd[2000]: time="2025-07-15T04:41:03.442331514Z" level=info msg="CreateContainer within sandbox \"3bdbda99b4784b633afb8509840f6d50c38f2040e8dd882a42354aa89e484fe3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 04:41:03.454584 systemd[1]: Started cri-containerd-e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b.scope - 
libcontainer container e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b. Jul 15 04:41:03.458970 containerd[2000]: time="2025-07-15T04:41:03.458167914Z" level=info msg="Container bd581ebcdc3fef6f73af9dd79d4088a8791fa555a0da092dd45726db43d6da23: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:03.465724 containerd[2000]: time="2025-07-15T04:41:03.465638286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-207,Uid:bdcada75f8537484f13e588f1b9210b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c656914335b05cfe10ec4da902c21aff970af597094b380c3c167c93108060f\"" Jul 15 04:41:03.475830 containerd[2000]: time="2025-07-15T04:41:03.475748490Z" level=info msg="CreateContainer within sandbox \"9c656914335b05cfe10ec4da902c21aff970af597094b380c3c167c93108060f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 04:41:03.482040 containerd[2000]: time="2025-07-15T04:41:03.481378494Z" level=info msg="CreateContainer within sandbox \"3bdbda99b4784b633afb8509840f6d50c38f2040e8dd882a42354aa89e484fe3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bd581ebcdc3fef6f73af9dd79d4088a8791fa555a0da092dd45726db43d6da23\"" Jul 15 04:41:03.483169 containerd[2000]: time="2025-07-15T04:41:03.483089538Z" level=info msg="StartContainer for \"bd581ebcdc3fef6f73af9dd79d4088a8791fa555a0da092dd45726db43d6da23\"" Jul 15 04:41:03.485080 containerd[2000]: time="2025-07-15T04:41:03.485001834Z" level=info msg="connecting to shim bd581ebcdc3fef6f73af9dd79d4088a8791fa555a0da092dd45726db43d6da23" address="unix:///run/containerd/s/565aa3391c21d5bc163142aa0231700399be76763343dd693b65e4fc19dfb999" protocol=ttrpc version=3 Jul 15 04:41:03.495147 containerd[2000]: time="2025-07-15T04:41:03.494524062Z" level=info msg="Container 244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:03.510865 containerd[2000]: 
time="2025-07-15T04:41:03.510794694Z" level=info msg="CreateContainer within sandbox \"9c656914335b05cfe10ec4da902c21aff970af597094b380c3c167c93108060f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020\"" Jul 15 04:41:03.513008 containerd[2000]: time="2025-07-15T04:41:03.512863086Z" level=info msg="StartContainer for \"244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020\"" Jul 15 04:41:03.520133 containerd[2000]: time="2025-07-15T04:41:03.519873438Z" level=info msg="connecting to shim 244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020" address="unix:///run/containerd/s/52edcbbec7fee63f7f4070c7c41b2828cd6dca62da7791e359a2ba712ffe9405" protocol=ttrpc version=3 Jul 15 04:41:03.542419 systemd[1]: Started cri-containerd-bd581ebcdc3fef6f73af9dd79d4088a8791fa555a0da092dd45726db43d6da23.scope - libcontainer container bd581ebcdc3fef6f73af9dd79d4088a8791fa555a0da092dd45726db43d6da23. Jul 15 04:41:03.589653 systemd[1]: Started cri-containerd-244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020.scope - libcontainer container 244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020. 
Jul 15 04:41:03.617317 containerd[2000]: time="2025-07-15T04:41:03.617253462Z" level=info msg="StartContainer for \"e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b\" returns successfully" Jul 15 04:41:03.676909 kubelet[2963]: E0715 04:41:03.676870 2963 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:03.680020 kubelet[2963]: E0715 04:41:03.679718 2963 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 04:41:03.701016 containerd[2000]: time="2025-07-15T04:41:03.700850527Z" level=info msg="StartContainer for \"bd581ebcdc3fef6f73af9dd79d4088a8791fa555a0da092dd45726db43d6da23\" returns successfully" Jul 15 04:41:03.808902 containerd[2000]: time="2025-07-15T04:41:03.808841911Z" level=info msg="StartContainer for \"244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020\" returns successfully" Jul 15 04:41:03.831165 kubelet[2963]: E0715 04:41:03.831076 2963 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.207:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.207:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 04:41:03.886555 kubelet[2963]: E0715 04:41:03.886408 2963 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.207:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 04:41:04.231209 kubelet[2963]: I0715 04:41:04.229447 2963 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-207" Jul 15 04:41:04.686175 kubelet[2963]: E0715 04:41:04.684825 2963 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:04.691553 kubelet[2963]: E0715 04:41:04.691011 2963 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:05.605086 update_engine[1974]: I20250715 04:41:05.604156 1974 update_attempter.cc:509] Updating boot flags... Jul 15 04:41:05.699864 kubelet[2963]: E0715 04:41:05.699826 2963 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:05.706190 kubelet[2963]: E0715 04:41:05.705287 2963 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:06.705256 kubelet[2963]: E0715 04:41:06.705214 2963 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:06.708457 kubelet[2963]: E0715 04:41:06.706514 2963 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:08.546996 kubelet[2963]: I0715 04:41:08.546705 2963 apiserver.go:52] "Watching apiserver" Jul 15 04:41:08.555800 kubelet[2963]: E0715 04:41:08.555734 2963 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ip-172-31-20-207\" not found" node="ip-172-31-20-207" Jul 15 04:41:08.591804 kubelet[2963]: I0715 04:41:08.591745 2963 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 04:41:08.629975 kubelet[2963]: I0715 04:41:08.629651 2963 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-207" Jul 15 04:41:08.689298 kubelet[2963]: I0715 04:41:08.689254 2963 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-207" Jul 15 04:41:08.774124 kubelet[2963]: E0715 04:41:08.774035 2963 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-207\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-207" Jul 15 04:41:08.775181 kubelet[2963]: I0715 04:41:08.774091 2963 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:08.788832 kubelet[2963]: E0715 04:41:08.788434 2963 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-207\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:08.788832 kubelet[2963]: I0715 04:41:08.788507 2963 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:08.799301 kubelet[2963]: E0715 04:41:08.799146 2963 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-207\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:11.153297 kubelet[2963]: I0715 04:41:11.153212 2963 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:11.330350 systemd[1]: Reload requested from client PID 3426 
('systemctl') (unit session-9.scope)... Jul 15 04:41:11.330381 systemd[1]: Reloading... Jul 15 04:41:11.527160 zram_generator::config[3473]: No configuration found. Jul 15 04:41:11.716181 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:41:12.024597 systemd[1]: Reloading finished in 693 ms. Jul 15 04:41:12.081543 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:41:12.100685 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 04:41:12.101260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:41:12.101352 systemd[1]: kubelet.service: Consumed 1.936s CPU time, 128.8M memory peak. Jul 15 04:41:12.106007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:41:12.529981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:41:12.545264 (kubelet)[3530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:41:12.653198 kubelet[3530]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:41:12.653198 kubelet[3530]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 04:41:12.653198 kubelet[3530]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 04:41:12.653198 kubelet[3530]: I0715 04:41:12.652688 3530 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:41:12.667170 kubelet[3530]: I0715 04:41:12.667090 3530 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 04:41:12.667170 kubelet[3530]: I0715 04:41:12.667165 3530 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:41:12.667632 kubelet[3530]: I0715 04:41:12.667591 3530 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 04:41:12.670405 kubelet[3530]: I0715 04:41:12.670300 3530 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 15 04:41:12.675389 kubelet[3530]: I0715 04:41:12.675322 3530 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:41:12.699884 kubelet[3530]: I0715 04:41:12.698757 3530 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:41:12.706490 kubelet[3530]: I0715 04:41:12.705861 3530 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:41:12.706490 kubelet[3530]: I0715 04:41:12.706333 3530 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:41:12.706706 kubelet[3530]: I0715 04:41:12.706388 3530 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-207","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:41:12.706855 kubelet[3530]: I0715 04:41:12.706722 3530 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 
04:41:12.706855 kubelet[3530]: I0715 04:41:12.706744 3530 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 04:41:12.706855 kubelet[3530]: I0715 04:41:12.706825 3530 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:41:12.707208 kubelet[3530]: I0715 04:41:12.707173 3530 kubelet.go:480] "Attempting to sync node with API server" Jul 15 04:41:12.707396 kubelet[3530]: I0715 04:41:12.707214 3530 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:41:12.707396 kubelet[3530]: I0715 04:41:12.707262 3530 kubelet.go:386] "Adding apiserver pod source" Jul 15 04:41:12.707396 kubelet[3530]: I0715 04:41:12.707294 3530 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:41:12.711169 kubelet[3530]: I0715 04:41:12.711088 3530 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:41:12.712322 kubelet[3530]: I0715 04:41:12.712288 3530 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 04:41:12.716638 kubelet[3530]: I0715 04:41:12.716529 3530 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 04:41:12.716848 kubelet[3530]: I0715 04:41:12.716829 3530 server.go:1289] "Started kubelet" Jul 15 04:41:12.721232 kubelet[3530]: I0715 04:41:12.721196 3530 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:41:12.731979 kubelet[3530]: I0715 04:41:12.731930 3530 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 15 04:41:12.737241 kubelet[3530]: I0715 04:41:12.737188 3530 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:41:12.746524 kubelet[3530]: I0715 04:41:12.742563 3530 server.go:317] "Adding debug handlers to kubelet server" Jul 15 04:41:12.761971 kubelet[3530]: I0715 04:41:12.761887 3530 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:41:12.762688 kubelet[3530]: I0715 04:41:12.762640 3530 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:41:12.764969 kubelet[3530]: I0715 04:41:12.764871 3530 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:41:12.772093 kubelet[3530]: I0715 04:41:12.772035 3530 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 04:41:12.775156 kubelet[3530]: E0715 04:41:12.774083 3530 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-207\" not found" Jul 15 04:41:12.789325 kubelet[3530]: I0715 04:41:12.765237 3530 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 04:41:12.789325 kubelet[3530]: I0715 04:41:12.788752 3530 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 04:41:12.789325 kubelet[3530]: I0715 04:41:12.789029 3530 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 04:41:12.789325 kubelet[3530]: I0715 04:41:12.789052 3530 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 04:41:12.792325 kubelet[3530]: E0715 04:41:12.792243 3530 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:41:12.793026 kubelet[3530]: I0715 04:41:12.792969 3530 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 04:41:12.812048 kubelet[3530]: I0715 04:41:12.811836 3530 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:41:12.852999 kubelet[3530]: E0715 04:41:12.852875 3530 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:41:12.856397 kubelet[3530]: I0715 04:41:12.853461 3530 factory.go:223] Registration of the containerd container factory successfully Jul 15 04:41:12.856397 kubelet[3530]: I0715 04:41:12.853488 3530 factory.go:223] Registration of the systemd container factory successfully Jul 15 04:41:12.856397 kubelet[3530]: I0715 04:41:12.853638 3530 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:41:12.892590 kubelet[3530]: E0715 04:41:12.892551 3530 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 04:41:12.964442 kubelet[3530]: I0715 04:41:12.964410 3530 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 04:41:12.964632 kubelet[3530]: I0715 04:41:12.964609 3530 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 04:41:12.964742 kubelet[3530]: I0715 04:41:12.964725 3530 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:41:12.965063 kubelet[3530]: I0715 04:41:12.965041 3530 state_mem.go:88] "Updated 
default CPUSet" cpuSet="" Jul 15 04:41:12.965210 kubelet[3530]: I0715 04:41:12.965170 3530 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 04:41:12.965387 kubelet[3530]: I0715 04:41:12.965368 3530 policy_none.go:49] "None policy: Start" Jul 15 04:41:12.965512 kubelet[3530]: I0715 04:41:12.965492 3530 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 04:41:12.965630 kubelet[3530]: I0715 04:41:12.965611 3530 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:41:12.965937 kubelet[3530]: I0715 04:41:12.965914 3530 state_mem.go:75] "Updated machine memory state" Jul 15 04:41:12.976890 kubelet[3530]: E0715 04:41:12.976857 3530 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 04:41:12.978147 kubelet[3530]: I0715 04:41:12.977547 3530 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:41:12.978147 kubelet[3530]: I0715 04:41:12.977578 3530 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:41:12.981360 kubelet[3530]: I0715 04:41:12.981046 3530 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:41:12.989523 kubelet[3530]: E0715 04:41:12.986654 3530 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 04:41:13.096089 kubelet[3530]: I0715 04:41:13.095734 3530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:13.101022 kubelet[3530]: I0715 04:41:13.098914 3530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:13.104493 kubelet[3530]: I0715 04:41:13.104427 3530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-207" Jul 15 04:41:13.113415 kubelet[3530]: I0715 04:41:13.113355 3530 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-207" Jul 15 04:41:13.122919 kubelet[3530]: E0715 04:41:13.122837 3530 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-207\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:13.136836 kubelet[3530]: I0715 04:41:13.136092 3530 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-207" Jul 15 04:41:13.136836 kubelet[3530]: I0715 04:41:13.136233 3530 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-207" Jul 15 04:41:13.215853 kubelet[3530]: I0715 04:41:13.215392 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5455a6fec75aeb600595c7e51b6d76e1-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-207\" (UID: \"5455a6fec75aeb600595c7e51b6d76e1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:13.215853 kubelet[3530]: I0715 04:41:13.215466 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5455a6fec75aeb600595c7e51b6d76e1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-207\" (UID: 
\"5455a6fec75aeb600595c7e51b6d76e1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:13.215853 kubelet[3530]: I0715 04:41:13.215507 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5455a6fec75aeb600595c7e51b6d76e1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-207\" (UID: \"5455a6fec75aeb600595c7e51b6d76e1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:13.215853 kubelet[3530]: I0715 04:41:13.215541 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5455a6fec75aeb600595c7e51b6d76e1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-207\" (UID: \"5455a6fec75aeb600595c7e51b6d76e1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:13.215853 kubelet[3530]: I0715 04:41:13.215576 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bdcada75f8537484f13e588f1b9210b7-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-207\" (UID: \"bdcada75f8537484f13e588f1b9210b7\") " pod="kube-system/kube-scheduler-ip-172-31-20-207" Jul 15 04:41:13.216257 kubelet[3530]: I0715 04:41:13.215610 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89863e38e91025150790b0bd74c00fbc-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-207\" (UID: \"89863e38e91025150790b0bd74c00fbc\") " pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:13.216257 kubelet[3530]: I0715 04:41:13.215643 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/89863e38e91025150790b0bd74c00fbc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-207\" (UID: \"89863e38e91025150790b0bd74c00fbc\") " pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:13.216257 kubelet[3530]: I0715 04:41:13.215684 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5455a6fec75aeb600595c7e51b6d76e1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-207\" (UID: \"5455a6fec75aeb600595c7e51b6d76e1\") " pod="kube-system/kube-controller-manager-ip-172-31-20-207" Jul 15 04:41:13.216257 kubelet[3530]: I0715 04:41:13.215718 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89863e38e91025150790b0bd74c00fbc-ca-certs\") pod \"kube-apiserver-ip-172-31-20-207\" (UID: \"89863e38e91025150790b0bd74c00fbc\") " pod="kube-system/kube-apiserver-ip-172-31-20-207" Jul 15 04:41:13.709185 kubelet[3530]: I0715 04:41:13.708966 3530 apiserver.go:52] "Watching apiserver" Jul 15 04:41:13.793498 kubelet[3530]: I0715 04:41:13.793422 3530 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 04:41:13.911147 kubelet[3530]: I0715 04:41:13.909721 3530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-207" Jul 15 04:41:13.926657 kubelet[3530]: E0715 04:41:13.926566 3530 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-207\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-207" Jul 15 04:41:13.959807 kubelet[3530]: I0715 04:41:13.958242 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-207" podStartSLOduration=2.9582199620000003 podStartE2EDuration="2.958219962s" 
podCreationTimestamp="2025-07-15 04:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:41:13.956608926 +0000 UTC m=+1.401625400" watchObservedRunningTime="2025-07-15 04:41:13.958219962 +0000 UTC m=+1.403236436" Jul 15 04:41:14.003633 kubelet[3530]: I0715 04:41:14.003529 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-207" podStartSLOduration=1.003505622 podStartE2EDuration="1.003505622s" podCreationTimestamp="2025-07-15 04:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:41:13.976552302 +0000 UTC m=+1.421568824" watchObservedRunningTime="2025-07-15 04:41:14.003505622 +0000 UTC m=+1.448522060" Jul 15 04:41:16.617203 kubelet[3530]: I0715 04:41:16.616909 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-207" podStartSLOduration=3.616886815 podStartE2EDuration="3.616886815s" podCreationTimestamp="2025-07-15 04:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:41:14.005701154 +0000 UTC m=+1.450717688" watchObservedRunningTime="2025-07-15 04:41:16.616886815 +0000 UTC m=+4.061903253" Jul 15 04:41:17.057138 kubelet[3530]: I0715 04:41:17.057040 3530 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 04:41:17.058535 containerd[2000]: time="2025-07-15T04:41:17.058479401Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 15 04:41:17.060012 kubelet[3530]: I0715 04:41:17.059909 3530 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 04:41:17.652979 systemd[1]: Created slice kubepods-besteffort-pod56b8a1e2_1d7b_4c67_a211_26f62f5ea9a7.slice - libcontainer container kubepods-besteffort-pod56b8a1e2_1d7b_4c67_a211_26f62f5ea9a7.slice. Jul 15 04:41:17.748927 kubelet[3530]: I0715 04:41:17.748776 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7-kube-proxy\") pod \"kube-proxy-bcmc6\" (UID: \"56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7\") " pod="kube-system/kube-proxy-bcmc6" Jul 15 04:41:17.749786 kubelet[3530]: I0715 04:41:17.749559 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7-xtables-lock\") pod \"kube-proxy-bcmc6\" (UID: \"56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7\") " pod="kube-system/kube-proxy-bcmc6" Jul 15 04:41:17.749786 kubelet[3530]: I0715 04:41:17.749641 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7-lib-modules\") pod \"kube-proxy-bcmc6\" (UID: \"56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7\") " pod="kube-system/kube-proxy-bcmc6" Jul 15 04:41:17.749786 kubelet[3530]: I0715 04:41:17.749685 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rghm\" (UniqueName: \"kubernetes.io/projected/56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7-kube-api-access-6rghm\") pod \"kube-proxy-bcmc6\" (UID: \"56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7\") " pod="kube-system/kube-proxy-bcmc6" Jul 15 04:41:17.864692 kubelet[3530]: E0715 04:41:17.864566 3530 projected.go:289] Couldn't get configMap 
kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 15 04:41:17.864692 kubelet[3530]: E0715 04:41:17.864612 3530 projected.go:194] Error preparing data for projected volume kube-api-access-6rghm for pod kube-system/kube-proxy-bcmc6: configmap "kube-root-ca.crt" not found Jul 15 04:41:17.866894 kubelet[3530]: E0715 04:41:17.864728 3530 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7-kube-api-access-6rghm podName:56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7 nodeName:}" failed. No retries permitted until 2025-07-15 04:41:18.364691437 +0000 UTC m=+5.809707875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6rghm" (UniqueName: "kubernetes.io/projected/56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7-kube-api-access-6rghm") pod "kube-proxy-bcmc6" (UID: "56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7") : configmap "kube-root-ca.crt" not found Jul 15 04:41:18.303247 systemd[1]: Created slice kubepods-besteffort-pod64384fb1_3e95_47c3_ab73_40a2f87cd085.slice - libcontainer container kubepods-besteffort-pod64384fb1_3e95_47c3_ab73_40a2f87cd085.slice. 
Jul 15 04:41:18.355861 kubelet[3530]: I0715 04:41:18.355723 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64384fb1-3e95-47c3-ab73-40a2f87cd085-var-lib-calico\") pod \"tigera-operator-747864d56d-gpg5v\" (UID: \"64384fb1-3e95-47c3-ab73-40a2f87cd085\") " pod="tigera-operator/tigera-operator-747864d56d-gpg5v" Jul 15 04:41:18.356034 kubelet[3530]: I0715 04:41:18.355904 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpxgd\" (UniqueName: \"kubernetes.io/projected/64384fb1-3e95-47c3-ab73-40a2f87cd085-kube-api-access-dpxgd\") pod \"tigera-operator-747864d56d-gpg5v\" (UID: \"64384fb1-3e95-47c3-ab73-40a2f87cd085\") " pod="tigera-operator/tigera-operator-747864d56d-gpg5v" Jul 15 04:41:18.568630 containerd[2000]: time="2025-07-15T04:41:18.568410417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bcmc6,Uid:56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7,Namespace:kube-system,Attempt:0,}" Jul 15 04:41:18.608132 containerd[2000]: time="2025-07-15T04:41:18.608003589Z" level=info msg="connecting to shim 8a44ea32e743d84d3e68823ce43df933f0b7b8218f41cfe8cb77b372ae308678" address="unix:///run/containerd/s/d4462106dc81a3c9a5e5c70d01f3574f2ee5f296ed27a27169c6ff6b9f64573a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:18.613700 containerd[2000]: time="2025-07-15T04:41:18.613560861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-gpg5v,Uid:64384fb1-3e95-47c3-ab73-40a2f87cd085,Namespace:tigera-operator,Attempt:0,}" Jul 15 04:41:18.684660 systemd[1]: Started cri-containerd-8a44ea32e743d84d3e68823ce43df933f0b7b8218f41cfe8cb77b372ae308678.scope - libcontainer container 8a44ea32e743d84d3e68823ce43df933f0b7b8218f41cfe8cb77b372ae308678. 
Jul 15 04:41:18.686824 containerd[2000]: time="2025-07-15T04:41:18.686752785Z" level=info msg="connecting to shim 32fea0b7869917c7423c44ad822809b0047e5a1127ef039b5b907136d9c32def" address="unix:///run/containerd/s/6e6df6ca0da2298a0aaceb7626455687a366ffdf27cce2ed049a697c83c48db3" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:18.748616 systemd[1]: Started cri-containerd-32fea0b7869917c7423c44ad822809b0047e5a1127ef039b5b907136d9c32def.scope - libcontainer container 32fea0b7869917c7423c44ad822809b0047e5a1127ef039b5b907136d9c32def. Jul 15 04:41:18.772957 containerd[2000]: time="2025-07-15T04:41:18.772782742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bcmc6,Uid:56b8a1e2-1d7b-4c67-a211-26f62f5ea9a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a44ea32e743d84d3e68823ce43df933f0b7b8218f41cfe8cb77b372ae308678\"" Jul 15 04:41:18.788144 containerd[2000]: time="2025-07-15T04:41:18.787681570Z" level=info msg="CreateContainer within sandbox \"8a44ea32e743d84d3e68823ce43df933f0b7b8218f41cfe8cb77b372ae308678\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 04:41:18.816238 containerd[2000]: time="2025-07-15T04:41:18.816073486Z" level=info msg="Container 199c4fb0dd087614ca2114fea8f99d74dcadf95605b0214c460b34e2407be782: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:18.836746 containerd[2000]: time="2025-07-15T04:41:18.835948558Z" level=info msg="CreateContainer within sandbox \"8a44ea32e743d84d3e68823ce43df933f0b7b8218f41cfe8cb77b372ae308678\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"199c4fb0dd087614ca2114fea8f99d74dcadf95605b0214c460b34e2407be782\"" Jul 15 04:41:18.840187 containerd[2000]: time="2025-07-15T04:41:18.839502694Z" level=info msg="StartContainer for \"199c4fb0dd087614ca2114fea8f99d74dcadf95605b0214c460b34e2407be782\"" Jul 15 04:41:18.853779 containerd[2000]: time="2025-07-15T04:41:18.853689550Z" level=info msg="connecting to shim 
199c4fb0dd087614ca2114fea8f99d74dcadf95605b0214c460b34e2407be782" address="unix:///run/containerd/s/d4462106dc81a3c9a5e5c70d01f3574f2ee5f296ed27a27169c6ff6b9f64573a" protocol=ttrpc version=3 Jul 15 04:41:18.894909 containerd[2000]: time="2025-07-15T04:41:18.894843634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-gpg5v,Uid:64384fb1-3e95-47c3-ab73-40a2f87cd085,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"32fea0b7869917c7423c44ad822809b0047e5a1127ef039b5b907136d9c32def\"" Jul 15 04:41:18.901346 containerd[2000]: time="2025-07-15T04:41:18.901296226Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 04:41:18.922844 systemd[1]: Started cri-containerd-199c4fb0dd087614ca2114fea8f99d74dcadf95605b0214c460b34e2407be782.scope - libcontainer container 199c4fb0dd087614ca2114fea8f99d74dcadf95605b0214c460b34e2407be782. Jul 15 04:41:19.030617 containerd[2000]: time="2025-07-15T04:41:19.030448075Z" level=info msg="StartContainer for \"199c4fb0dd087614ca2114fea8f99d74dcadf95605b0214c460b34e2407be782\" returns successfully" Jul 15 04:41:20.334255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119471829.mount: Deactivated successfully. 
Jul 15 04:41:20.573866 kubelet[3530]: I0715 04:41:20.572959 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bcmc6" podStartSLOduration=3.572935163 podStartE2EDuration="3.572935163s" podCreationTimestamp="2025-07-15 04:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:41:19.965778888 +0000 UTC m=+7.410795350" watchObservedRunningTime="2025-07-15 04:41:20.572935163 +0000 UTC m=+8.017951601" Jul 15 04:41:21.365578 containerd[2000]: time="2025-07-15T04:41:21.365494607Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:21.367491 containerd[2000]: time="2025-07-15T04:41:21.367413683Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 15 04:41:21.369926 containerd[2000]: time="2025-07-15T04:41:21.369848543Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:21.384555 containerd[2000]: time="2025-07-15T04:41:21.384378167Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:21.387243 containerd[2000]: time="2025-07-15T04:41:21.387165839Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.485023937s" Jul 15 04:41:21.387243 containerd[2000]: time="2025-07-15T04:41:21.387242639Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 15 04:41:21.397376 containerd[2000]: time="2025-07-15T04:41:21.397261163Z" level=info msg="CreateContainer within sandbox \"32fea0b7869917c7423c44ad822809b0047e5a1127ef039b5b907136d9c32def\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 04:41:21.417490 containerd[2000]: time="2025-07-15T04:41:21.417402587Z" level=info msg="Container 1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:21.434465 containerd[2000]: time="2025-07-15T04:41:21.434356547Z" level=info msg="CreateContainer within sandbox \"32fea0b7869917c7423c44ad822809b0047e5a1127ef039b5b907136d9c32def\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590\"" Jul 15 04:41:21.435946 containerd[2000]: time="2025-07-15T04:41:21.435509171Z" level=info msg="StartContainer for \"1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590\"" Jul 15 04:41:21.438457 containerd[2000]: time="2025-07-15T04:41:21.438401639Z" level=info msg="connecting to shim 1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590" address="unix:///run/containerd/s/6e6df6ca0da2298a0aaceb7626455687a366ffdf27cce2ed049a697c83c48db3" protocol=ttrpc version=3 Jul 15 04:41:21.484509 systemd[1]: Started cri-containerd-1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590.scope - libcontainer container 1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590. 
Jul 15 04:41:21.565901 containerd[2000]: time="2025-07-15T04:41:21.565800276Z" level=info msg="StartContainer for \"1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590\" returns successfully" Jul 15 04:41:21.980156 kubelet[3530]: I0715 04:41:21.980000 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-gpg5v" podStartSLOduration=1.4908427770000001 podStartE2EDuration="3.979974158s" podCreationTimestamp="2025-07-15 04:41:18 +0000 UTC" firstStartedPulling="2025-07-15 04:41:18.89958349 +0000 UTC m=+6.344599940" lastFinishedPulling="2025-07-15 04:41:21.388714883 +0000 UTC m=+8.833731321" observedRunningTime="2025-07-15 04:41:21.979474478 +0000 UTC m=+9.424490988" watchObservedRunningTime="2025-07-15 04:41:21.979974158 +0000 UTC m=+9.424990608" Jul 15 04:41:30.834761 sudo[2382]: pam_unix(sudo:session): session closed for user root Jul 15 04:41:30.858727 sshd[2381]: Connection closed by 139.178.89.65 port 57040 Jul 15 04:41:30.859443 sshd-session[2378]: pam_unix(sshd:session): session closed for user core Jul 15 04:41:30.871281 systemd[1]: sshd@8-172.31.20.207:22-139.178.89.65:57040.service: Deactivated successfully. Jul 15 04:41:30.879732 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 04:41:30.880712 systemd[1]: session-9.scope: Consumed 12.404s CPU time, 223.6M memory peak. Jul 15 04:41:30.884787 systemd-logind[1973]: Session 9 logged out. Waiting for processes to exit. Jul 15 04:41:30.888414 systemd-logind[1973]: Removed session 9. Jul 15 04:41:40.468917 systemd[1]: Created slice kubepods-besteffort-pod9569fa53_6385_4f91_9ca8_5c98a073feca.slice - libcontainer container kubepods-besteffort-pod9569fa53_6385_4f91_9ca8_5c98a073feca.slice. 
Jul 15 04:41:40.495150 kubelet[3530]: I0715 04:41:40.495009 3530 status_manager.go:895] "Failed to get status for pod" podUID="9569fa53-6385-4f91-9ca8-5c98a073feca" pod="calico-system/calico-typha-9bd566bd6-7qx69" err="pods \"calico-typha-9bd566bd6-7qx69\" is forbidden: User \"system:node:ip-172-31-20-207\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-20-207' and this object" Jul 15 04:41:40.506767 kubelet[3530]: I0715 04:41:40.506710 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9569fa53-6385-4f91-9ca8-5c98a073feca-tigera-ca-bundle\") pod \"calico-typha-9bd566bd6-7qx69\" (UID: \"9569fa53-6385-4f91-9ca8-5c98a073feca\") " pod="calico-system/calico-typha-9bd566bd6-7qx69" Jul 15 04:41:40.506946 kubelet[3530]: I0715 04:41:40.506810 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9569fa53-6385-4f91-9ca8-5c98a073feca-typha-certs\") pod \"calico-typha-9bd566bd6-7qx69\" (UID: \"9569fa53-6385-4f91-9ca8-5c98a073feca\") " pod="calico-system/calico-typha-9bd566bd6-7qx69" Jul 15 04:41:40.509136 kubelet[3530]: I0715 04:41:40.507821 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztn6g\" (UniqueName: \"kubernetes.io/projected/9569fa53-6385-4f91-9ca8-5c98a073feca-kube-api-access-ztn6g\") pod \"calico-typha-9bd566bd6-7qx69\" (UID: \"9569fa53-6385-4f91-9ca8-5c98a073feca\") " pod="calico-system/calico-typha-9bd566bd6-7qx69" Jul 15 04:41:40.781653 containerd[2000]: time="2025-07-15T04:41:40.781188343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9bd566bd6-7qx69,Uid:9569fa53-6385-4f91-9ca8-5c98a073feca,Namespace:calico-system,Attempt:0,}" Jul 15 04:41:40.845045 containerd[2000]: 
time="2025-07-15T04:41:40.844397647Z" level=info msg="connecting to shim c9cf82e0bb8b18b1e7494d157a1d8c161e0b2b4204f279a45637c9b6a19136c2" address="unix:///run/containerd/s/bb6b86635ea3533e90706cc5f70e0bdb1424fb4cd4e3d163a967b38cb3105eda" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:40.875282 systemd[1]: Created slice kubepods-besteffort-pod67ca6a4f_f8ba_4255_8a0b_ecf3f8142aca.slice - libcontainer container kubepods-besteffort-pod67ca6a4f_f8ba_4255_8a0b_ecf3f8142aca.slice. Jul 15 04:41:40.911592 kubelet[3530]: I0715 04:41:40.911054 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-xtables-lock\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.911760 kubelet[3530]: I0715 04:41:40.911666 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-flexvol-driver-host\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.911817 kubelet[3530]: I0715 04:41:40.911786 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssqlc\" (UniqueName: \"kubernetes.io/projected/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-kube-api-access-ssqlc\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.912558 kubelet[3530]: I0715 04:41:40.911912 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-tigera-ca-bundle\") pod \"calico-node-dwc8z\" (UID: 
\"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.912558 kubelet[3530]: I0715 04:41:40.912027 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-cni-bin-dir\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.912558 kubelet[3530]: I0715 04:41:40.912151 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-cni-net-dir\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.912558 kubelet[3530]: I0715 04:41:40.912271 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-cni-log-dir\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.912558 kubelet[3530]: I0715 04:41:40.912372 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-node-certs\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.916294 kubelet[3530]: I0715 04:41:40.916225 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-policysync\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 
04:41:40.916440 kubelet[3530]: I0715 04:41:40.916342 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-var-lib-calico\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.916440 kubelet[3530]: I0715 04:41:40.916404 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-var-run-calico\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.916571 kubelet[3530]: I0715 04:41:40.916444 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca-lib-modules\") pod \"calico-node-dwc8z\" (UID: \"67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca\") " pod="calico-system/calico-node-dwc8z" Jul 15 04:41:40.935468 systemd[1]: Started cri-containerd-c9cf82e0bb8b18b1e7494d157a1d8c161e0b2b4204f279a45637c9b6a19136c2.scope - libcontainer container c9cf82e0bb8b18b1e7494d157a1d8c161e0b2b4204f279a45637c9b6a19136c2. 
Jul 15 04:41:41.023966 kubelet[3530]: E0715 04:41:41.023895 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.023966 kubelet[3530]: W0715 04:41:41.023942 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.024887 kubelet[3530]: E0715 04:41:41.023981 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.025311 kubelet[3530]: E0715 04:41:41.025251 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.025311 kubelet[3530]: W0715 04:41:41.025290 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.026323 kubelet[3530]: E0715 04:41:41.025323 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.052224 kubelet[3530]: E0715 04:41:41.052074 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.052224 kubelet[3530]: W0715 04:41:41.052143 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.052224 kubelet[3530]: E0715 04:41:41.052179 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.059988 kubelet[3530]: E0715 04:41:41.059936 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.059988 kubelet[3530]: W0715 04:41:41.059978 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.060187 kubelet[3530]: E0715 04:41:41.060013 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.183145 kubelet[3530]: E0715 04:41:41.182817 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdwd" podUID="0076db00-c9aa-49c4-be93-9c703fd23cc9" Jul 15 04:41:41.189139 containerd[2000]: time="2025-07-15T04:41:41.188321153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dwc8z,Uid:67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca,Namespace:calico-system,Attempt:0,}" Jul 15 04:41:41.206672 containerd[2000]: time="2025-07-15T04:41:41.206600657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9bd566bd6-7qx69,Uid:9569fa53-6385-4f91-9ca8-5c98a073feca,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9cf82e0bb8b18b1e7494d157a1d8c161e0b2b4204f279a45637c9b6a19136c2\"" Jul 15 04:41:41.214040 containerd[2000]: time="2025-07-15T04:41:41.213971585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 04:41:41.263514 containerd[2000]: time="2025-07-15T04:41:41.262690169Z" level=info msg="connecting to shim f3a7aa3ddbcf6158b95f50ba2441c248685066a3173541e5478cd8f14fcdd355" 
address="unix:///run/containerd/s/2baf18e559b59acde58e0452a216d234829ebd4ea747ab0f97d7aec92c530bd6" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:41:41.274347 kubelet[3530]: E0715 04:41:41.274297 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.274347 kubelet[3530]: W0715 04:41:41.274337 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.274559 kubelet[3530]: E0715 04:41:41.274370 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.275652 kubelet[3530]: E0715 04:41:41.275601 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.275813 kubelet[3530]: W0715 04:41:41.275641 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.275813 kubelet[3530]: E0715 04:41:41.275717 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.276499 kubelet[3530]: E0715 04:41:41.276450 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.276499 kubelet[3530]: W0715 04:41:41.276485 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.276640 kubelet[3530]: E0715 04:41:41.276515 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.278239 kubelet[3530]: E0715 04:41:41.278080 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.278239 kubelet[3530]: W0715 04:41:41.278171 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.278239 kubelet[3530]: E0715 04:41:41.278206 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.278781 kubelet[3530]: E0715 04:41:41.278666 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.278781 kubelet[3530]: W0715 04:41:41.278691 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.278781 kubelet[3530]: E0715 04:41:41.278718 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.279857 kubelet[3530]: E0715 04:41:41.279631 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.279857 kubelet[3530]: W0715 04:41:41.279669 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.279857 kubelet[3530]: E0715 04:41:41.279700 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.281145 kubelet[3530]: E0715 04:41:41.280874 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.281145 kubelet[3530]: W0715 04:41:41.280899 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.281145 kubelet[3530]: E0715 04:41:41.280929 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.282382 kubelet[3530]: E0715 04:41:41.282321 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.282382 kubelet[3530]: W0715 04:41:41.282369 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.282900 kubelet[3530]: E0715 04:41:41.282402 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.283761 kubelet[3530]: E0715 04:41:41.283304 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.283761 kubelet[3530]: W0715 04:41:41.283332 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.283761 kubelet[3530]: E0715 04:41:41.283362 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.284571 kubelet[3530]: E0715 04:41:41.284522 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.284571 kubelet[3530]: W0715 04:41:41.284559 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.284746 kubelet[3530]: E0715 04:41:41.284592 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.285164 kubelet[3530]: E0715 04:41:41.284872 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.285164 kubelet[3530]: W0715 04:41:41.284901 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.285164 kubelet[3530]: E0715 04:41:41.284925 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.286127 kubelet[3530]: E0715 04:41:41.285612 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.286127 kubelet[3530]: W0715 04:41:41.285648 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.286127 kubelet[3530]: E0715 04:41:41.285682 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.287052 kubelet[3530]: E0715 04:41:41.286855 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.287052 kubelet[3530]: W0715 04:41:41.286891 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.287052 kubelet[3530]: E0715 04:41:41.286924 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.288123 kubelet[3530]: E0715 04:41:41.287351 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.288123 kubelet[3530]: W0715 04:41:41.287373 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.288123 kubelet[3530]: E0715 04:41:41.287397 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.288123 kubelet[3530]: E0715 04:41:41.287690 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.288123 kubelet[3530]: W0715 04:41:41.287710 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.288123 kubelet[3530]: E0715 04:41:41.287731 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.289364 kubelet[3530]: E0715 04:41:41.288451 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.289364 kubelet[3530]: W0715 04:41:41.288476 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.289364 kubelet[3530]: E0715 04:41:41.288505 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.290465 kubelet[3530]: E0715 04:41:41.290414 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.290465 kubelet[3530]: W0715 04:41:41.290453 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.290692 kubelet[3530]: E0715 04:41:41.290487 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.290865 kubelet[3530]: E0715 04:41:41.290831 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.290865 kubelet[3530]: W0715 04:41:41.290858 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.291000 kubelet[3530]: E0715 04:41:41.290880 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.292464 kubelet[3530]: E0715 04:41:41.292412 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.292464 kubelet[3530]: W0715 04:41:41.292451 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.292807 kubelet[3530]: E0715 04:41:41.292484 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.293068 kubelet[3530]: E0715 04:41:41.293030 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.293068 kubelet[3530]: W0715 04:41:41.293061 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.293372 kubelet[3530]: E0715 04:41:41.293088 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.320797 kubelet[3530]: E0715 04:41:41.320068 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.320797 kubelet[3530]: W0715 04:41:41.320224 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.320797 kubelet[3530]: E0715 04:41:41.320261 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.320797 kubelet[3530]: I0715 04:41:41.320320 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fptqc\" (UniqueName: \"kubernetes.io/projected/0076db00-c9aa-49c4-be93-9c703fd23cc9-kube-api-access-fptqc\") pod \"csi-node-driver-ghdwd\" (UID: \"0076db00-c9aa-49c4-be93-9c703fd23cc9\") " pod="calico-system/csi-node-driver-ghdwd" Jul 15 04:41:41.324267 kubelet[3530]: E0715 04:41:41.324017 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.324267 kubelet[3530]: W0715 04:41:41.324057 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.324267 kubelet[3530]: E0715 04:41:41.324090 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.324267 kubelet[3530]: I0715 04:41:41.324177 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0076db00-c9aa-49c4-be93-9c703fd23cc9-kubelet-dir\") pod \"csi-node-driver-ghdwd\" (UID: \"0076db00-c9aa-49c4-be93-9c703fd23cc9\") " pod="calico-system/csi-node-driver-ghdwd" Jul 15 04:41:41.325027 kubelet[3530]: E0715 04:41:41.324819 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.325027 kubelet[3530]: W0715 04:41:41.324851 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.325027 kubelet[3530]: E0715 04:41:41.324883 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.326996 kubelet[3530]: E0715 04:41:41.326358 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.326996 kubelet[3530]: W0715 04:41:41.326401 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.326996 kubelet[3530]: E0715 04:41:41.326433 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.326996 kubelet[3530]: E0715 04:41:41.326955 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.327396 kubelet[3530]: W0715 04:41:41.327035 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.327396 kubelet[3530]: E0715 04:41:41.327136 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.327800 kubelet[3530]: I0715 04:41:41.327329 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0076db00-c9aa-49c4-be93-9c703fd23cc9-varrun\") pod \"csi-node-driver-ghdwd\" (UID: \"0076db00-c9aa-49c4-be93-9c703fd23cc9\") " pod="calico-system/csi-node-driver-ghdwd" Jul 15 04:41:41.328511 kubelet[3530]: E0715 04:41:41.328450 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.328511 kubelet[3530]: W0715 04:41:41.328490 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.328511 kubelet[3530]: E0715 04:41:41.328522 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.330285 kubelet[3530]: E0715 04:41:41.330228 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.330285 kubelet[3530]: W0715 04:41:41.330271 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.330569 kubelet[3530]: E0715 04:41:41.330306 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.331520 kubelet[3530]: E0715 04:41:41.331477 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.331520 kubelet[3530]: W0715 04:41:41.331515 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.332402 kubelet[3530]: E0715 04:41:41.331548 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.332402 kubelet[3530]: I0715 04:41:41.331607 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0076db00-c9aa-49c4-be93-9c703fd23cc9-registration-dir\") pod \"csi-node-driver-ghdwd\" (UID: \"0076db00-c9aa-49c4-be93-9c703fd23cc9\") " pod="calico-system/csi-node-driver-ghdwd" Jul 15 04:41:41.333685 kubelet[3530]: E0715 04:41:41.333545 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.334232 kubelet[3530]: W0715 04:41:41.333934 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.334232 kubelet[3530]: E0715 04:41:41.333984 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.335693 kubelet[3530]: E0715 04:41:41.335492 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.336692 kubelet[3530]: W0715 04:41:41.336203 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.336692 kubelet[3530]: E0715 04:41:41.336253 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.338606 kubelet[3530]: E0715 04:41:41.338558 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.339058 kubelet[3530]: W0715 04:41:41.338773 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.339058 kubelet[3530]: E0715 04:41:41.338813 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.339058 kubelet[3530]: I0715 04:41:41.338896 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0076db00-c9aa-49c4-be93-9c703fd23cc9-socket-dir\") pod \"csi-node-driver-ghdwd\" (UID: \"0076db00-c9aa-49c4-be93-9c703fd23cc9\") " pod="calico-system/csi-node-driver-ghdwd" Jul 15 04:41:41.340545 kubelet[3530]: E0715 04:41:41.340327 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.340892 kubelet[3530]: W0715 04:41:41.340472 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.340892 kubelet[3530]: E0715 04:41:41.340740 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.343688 kubelet[3530]: E0715 04:41:41.343476 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.344655 kubelet[3530]: W0715 04:41:41.344188 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.344655 kubelet[3530]: E0715 04:41:41.344253 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.346343 kubelet[3530]: E0715 04:41:41.346220 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.347008 kubelet[3530]: W0715 04:41:41.346592 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.347008 kubelet[3530]: E0715 04:41:41.346639 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.348609 kubelet[3530]: E0715 04:41:41.348412 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.348609 kubelet[3530]: W0715 04:41:41.348523 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.348609 kubelet[3530]: E0715 04:41:41.348555 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.380789 systemd[1]: Started cri-containerd-f3a7aa3ddbcf6158b95f50ba2441c248685066a3173541e5478cd8f14fcdd355.scope - libcontainer container f3a7aa3ddbcf6158b95f50ba2441c248685066a3173541e5478cd8f14fcdd355. Jul 15 04:41:41.441294 kubelet[3530]: E0715 04:41:41.441239 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.441294 kubelet[3530]: W0715 04:41:41.441282 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.441822 kubelet[3530]: E0715 04:41:41.441316 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.442865 kubelet[3530]: E0715 04:41:41.442808 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.442865 kubelet[3530]: W0715 04:41:41.442850 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.443504 kubelet[3530]: E0715 04:41:41.442884 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.444395 kubelet[3530]: E0715 04:41:41.444346 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.444395 kubelet[3530]: W0715 04:41:41.444385 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.444395 kubelet[3530]: E0715 04:41:41.444417 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.446084 kubelet[3530]: E0715 04:41:41.445848 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.446084 kubelet[3530]: W0715 04:41:41.445888 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.446084 kubelet[3530]: E0715 04:41:41.445920 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.447017 kubelet[3530]: E0715 04:41:41.446447 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.447017 kubelet[3530]: W0715 04:41:41.446484 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.447017 kubelet[3530]: E0715 04:41:41.446518 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.447568 kubelet[3530]: E0715 04:41:41.447261 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.447568 kubelet[3530]: W0715 04:41:41.447287 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.447568 kubelet[3530]: E0715 04:41:41.447315 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.448682 kubelet[3530]: E0715 04:41:41.448634 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.448682 kubelet[3530]: W0715 04:41:41.448672 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.448682 kubelet[3530]: E0715 04:41:41.448703 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.449234 kubelet[3530]: E0715 04:41:41.449070 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.450213 kubelet[3530]: W0715 04:41:41.450148 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.450497 kubelet[3530]: E0715 04:41:41.450217 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.450722 kubelet[3530]: E0715 04:41:41.450686 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.450722 kubelet[3530]: W0715 04:41:41.450717 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.451419 kubelet[3530]: E0715 04:41:41.450743 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 04:41:41.451772 kubelet[3530]: E0715 04:41:41.451666 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.451772 kubelet[3530]: W0715 04:41:41.451704 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.451772 kubelet[3530]: E0715 04:41:41.451735 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 04:41:41.452593 kubelet[3530]: E0715 04:41:41.452547 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 04:41:41.452593 kubelet[3530]: W0715 04:41:41.452582 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 04:41:41.452988 kubelet[3530]: E0715 04:41:41.452615 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 15 04:41:41.453898 kubelet[3530]: E0715 04:41:41.453837 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:41:41.453898 kubelet[3530]: W0715 04:41:41.453877 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:41:41.454487 kubelet[3530]: E0715 04:41:41.453909 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 04:41:41.654867 containerd[2000]: time="2025-07-15T04:41:41.654500815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dwc8z,Uid:67ca6a4f-f8ba-4255-8a0b-ecf3f8142aca,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3a7aa3ddbcf6158b95f50ba2441c248685066a3173541e5478cd8f14fcdd355\""
Jul 15 04:41:42.790749 kubelet[3530]: E0715 04:41:42.790650 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdwd" podUID="0076db00-c9aa-49c4-be93-9c703fd23cc9"
Jul 15 04:41:42.823352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642760228.mount: Deactivated successfully.
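The repeated `driver-call.go` failures above all have one cause: the kubelet probes `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` with the `init` command, the executable is missing from `$PATH`, so the call produces empty output, which then fails JSON unmarshalling. A FlexVolume driver is expected to answer every call with a JSON status object on stdout. As an illustrative sketch only (the `nodeagent~uds` path comes from the log; the script is not the actual driver), a minimal driver satisfying the `init` probe might look like:

```shell
#!/bin/sh
# Hypothetical stand-in for the missing FlexVolume driver binary.
# The kubelet invokes the driver as: <driver> init | attach | mount | ...
# and parses a JSON status object from stdout for each call.
driver() {
  case "$1" in
    init)
      # Report success and declare that this driver needs no attach step.
      printf '%s\n' '{"status": "Success", "capabilities": {"attach": false}}'
      ;;
    *)
      # Any call the driver does not implement must still return valid JSON.
      printf '%s\n' '{"status": "Not supported"}'
      ;;
  esac
}

driver init   # prints {"status": "Success", "capabilities": {"attach": false}}
```

With a response like this on stdout, the `Failed to unmarshal output for command: init, output: ""` errors would not occur, since the kubelet would receive a parseable JSON document instead of an empty string.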
Jul 15 04:41:43.974752 containerd[2000]: time="2025-07-15T04:41:43.974685755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:41:43.976675 containerd[2000]: time="2025-07-15T04:41:43.976600763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 15 04:41:43.980313 containerd[2000]: time="2025-07-15T04:41:43.980149943Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:41:43.990598 containerd[2000]: time="2025-07-15T04:41:43.990512699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:41:43.996633 containerd[2000]: time="2025-07-15T04:41:43.996534383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.782489538s"
Jul 15 04:41:43.996633 containerd[2000]: time="2025-07-15T04:41:43.996617447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 15 04:41:43.998985 containerd[2000]: time="2025-07-15T04:41:43.998909903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 15 04:41:44.046388 containerd[2000]: time="2025-07-15T04:41:44.046320991Z" level=info msg="CreateContainer within sandbox \"c9cf82e0bb8b18b1e7494d157a1d8c161e0b2b4204f279a45637c9b6a19136c2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 15 04:41:44.066934 containerd[2000]: time="2025-07-15T04:41:44.063784327Z" level=info msg="Container 4167fd373e9cb676b906cbb440181be78e070f904a4c7ba727d78f11432a6f25: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:41:44.092203 containerd[2000]: time="2025-07-15T04:41:44.090678187Z" level=info msg="CreateContainer within sandbox \"c9cf82e0bb8b18b1e7494d157a1d8c161e0b2b4204f279a45637c9b6a19136c2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4167fd373e9cb676b906cbb440181be78e070f904a4c7ba727d78f11432a6f25\""
Jul 15 04:41:44.099387 containerd[2000]: time="2025-07-15T04:41:44.099292196Z" level=info msg="StartContainer for \"4167fd373e9cb676b906cbb440181be78e070f904a4c7ba727d78f11432a6f25\""
Jul 15 04:41:44.101321 containerd[2000]: time="2025-07-15T04:41:44.101260520Z" level=info msg="connecting to shim 4167fd373e9cb676b906cbb440181be78e070f904a4c7ba727d78f11432a6f25" address="unix:///run/containerd/s/bb6b86635ea3533e90706cc5f70e0bdb1424fb4cd4e3d163a967b38cb3105eda" protocol=ttrpc version=3
Jul 15 04:41:44.153442 systemd[1]: Started cri-containerd-4167fd373e9cb676b906cbb440181be78e070f904a4c7ba727d78f11432a6f25.scope - libcontainer container 4167fd373e9cb676b906cbb440181be78e070f904a4c7ba727d78f11432a6f25.
Jul 15 04:41:44.245759 containerd[2000]: time="2025-07-15T04:41:44.245428196Z" level=info msg="StartContainer for \"4167fd373e9cb676b906cbb440181be78e070f904a4c7ba727d78f11432a6f25\" returns successfully"
Jul 15 04:41:44.790811 kubelet[3530]: E0715 04:41:44.790724 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdwd" podUID="0076db00-c9aa-49c4-be93-9c703fd23cc9"
Jul 15 04:41:45.110622 kubelet[3530]: I0715 04:41:45.110520 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9bd566bd6-7qx69" podStartSLOduration=2.324800347 podStartE2EDuration="5.110481633s" podCreationTimestamp="2025-07-15 04:41:40 +0000 UTC" firstStartedPulling="2025-07-15 04:41:41.212011625 +0000 UTC m=+28.657028051" lastFinishedPulling="2025-07-15 04:41:43.997692911 +0000 UTC m=+31.442709337" observedRunningTime="2025-07-15 04:41:45.087858668 +0000 UTC m=+32.532875118" watchObservedRunningTime="2025-07-15 04:41:45.110481633 +0000 UTC m=+32.555498071"
Jul 15 04:41:45.123690 kubelet[3530]: E0715 04:41:45.122618 3530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 04:41:45.123690 kubelet[3530]: W0715 04:41:45.123414 3530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 04:41:45.123690 kubelet[3530]: E0715 04:41:45.123464 3530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jul 15 04:41:45.676334 containerd[2000]: time="2025-07-15T04:41:45.676252919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:45.678400 containerd[2000]: time="2025-07-15T04:41:45.678303611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 15 04:41:45.680951 containerd[2000]: time="2025-07-15T04:41:45.680866067Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:45.685632 containerd[2000]: time="2025-07-15T04:41:45.685493387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:45.687162 containerd[2000]: time="2025-07-15T04:41:45.686873495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.68788252s" Jul 15 04:41:45.687162 containerd[2000]: time="2025-07-15T04:41:45.686942183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 15 04:41:45.696534 containerd[2000]: time="2025-07-15T04:41:45.696468479Z" level=info msg="CreateContainer within sandbox \"f3a7aa3ddbcf6158b95f50ba2441c248685066a3173541e5478cd8f14fcdd355\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 04:41:45.716645 containerd[2000]: time="2025-07-15T04:41:45.716564124Z" level=info msg="Container 16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:45.725338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717325068.mount: Deactivated successfully. Jul 15 04:41:45.742766 containerd[2000]: time="2025-07-15T04:41:45.742571916Z" level=info msg="CreateContainer within sandbox \"f3a7aa3ddbcf6158b95f50ba2441c248685066a3173541e5478cd8f14fcdd355\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf\"" Jul 15 04:41:45.744287 containerd[2000]: time="2025-07-15T04:41:45.744189048Z" level=info msg="StartContainer for \"16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf\"" Jul 15 04:41:45.751293 containerd[2000]: time="2025-07-15T04:41:45.751205784Z" level=info msg="connecting to shim 16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf" address="unix:///run/containerd/s/2baf18e559b59acde58e0452a216d234829ebd4ea747ab0f97d7aec92c530bd6" protocol=ttrpc version=3 Jul 15 04:41:45.797476 systemd[1]: Started cri-containerd-16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf.scope - libcontainer container 16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf. Jul 15 04:41:45.883176 containerd[2000]: time="2025-07-15T04:41:45.882960828Z" level=info msg="StartContainer for \"16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf\" returns successfully" Jul 15 04:41:45.916747 systemd[1]: cri-containerd-16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf.scope: Deactivated successfully. 
Jul 15 04:41:45.926914 containerd[2000]: time="2025-07-15T04:41:45.926688505Z" level=info msg="received exit event container_id:\"16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf\" id:\"16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf\" pid:4209 exited_at:{seconds:1752554505 nanos:926125405}"
Jul 15 04:41:45.927086 containerd[2000]: time="2025-07-15T04:41:45.927034693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf\" id:\"16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf\" pid:4209 exited_at:{seconds:1752554505 nanos:926125405}"
Jul 15 04:41:45.971024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16a6a8b215a471dd4273db965ef0e17c5ce44d80683d992bc1ee62ba507b4ecf-rootfs.mount: Deactivated successfully.
Jul 15 04:41:46.791343 kubelet[3530]: E0715 04:41:46.789571 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdwd" podUID="0076db00-c9aa-49c4-be93-9c703fd23cc9"
Jul 15 04:41:47.081135 containerd[2000]: time="2025-07-15T04:41:47.080560366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 15 04:41:48.790823 kubelet[3530]: E0715 04:41:48.790237 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdwd" podUID="0076db00-c9aa-49c4-be93-9c703fd23cc9"
Jul 15 04:41:50.166405 containerd[2000]: time="2025-07-15T04:41:50.166321202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:41:50.168064 containerd[2000]: time="2025-07-15T04:41:50.167983850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320"
Jul 15 04:41:50.169375 containerd[2000]: time="2025-07-15T04:41:50.169303094Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:41:50.172819 containerd[2000]: time="2025-07-15T04:41:50.172691006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 04:41:50.174280 containerd[2000]: time="2025-07-15T04:41:50.174080258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.093460132s"
Jul 15 04:41:50.174280 containerd[2000]: time="2025-07-15T04:41:50.174154202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
Jul 15 04:41:50.181190 containerd[2000]: time="2025-07-15T04:41:50.181131110Z" level=info msg="CreateContainer within sandbox \"f3a7aa3ddbcf6158b95f50ba2441c248685066a3173541e5478cd8f14fcdd355\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 15 04:41:50.196132 containerd[2000]: time="2025-07-15T04:41:50.195922022Z" level=info msg="Container 78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:41:50.216061 containerd[2000]: time="2025-07-15T04:41:50.216008498Z" level=info msg="CreateContainer within sandbox \"f3a7aa3ddbcf6158b95f50ba2441c248685066a3173541e5478cd8f14fcdd355\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1\""
Jul 15 04:41:50.218081 containerd[2000]: time="2025-07-15T04:41:50.217857098Z" level=info msg="StartContainer for \"78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1\""
Jul 15 04:41:50.222730 containerd[2000]: time="2025-07-15T04:41:50.222619634Z" level=info msg="connecting to shim 78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1" address="unix:///run/containerd/s/2baf18e559b59acde58e0452a216d234829ebd4ea747ab0f97d7aec92c530bd6" protocol=ttrpc version=3
Jul 15 04:41:50.266438 systemd[1]: Started cri-containerd-78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1.scope - libcontainer container 78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1.
Jul 15 04:41:50.354256 containerd[2000]: time="2025-07-15T04:41:50.354168039Z" level=info msg="StartContainer for \"78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1\" returns successfully"
Jul 15 04:41:50.790080 kubelet[3530]: E0715 04:41:50.789585 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghdwd" podUID="0076db00-c9aa-49c4-be93-9c703fd23cc9"
Jul 15 04:41:51.317745 containerd[2000]: time="2025-07-15T04:41:51.317673255Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 04:41:51.322930 systemd[1]: cri-containerd-78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1.scope: Deactivated successfully.
Jul 15 04:41:51.323808 systemd[1]: cri-containerd-78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1.scope: Consumed 966ms CPU time, 188.4M memory peak, 165.8M written to disk.
Jul 15 04:41:51.327688 containerd[2000]: time="2025-07-15T04:41:51.327607059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1\" id:\"78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1\" pid:4269 exited_at:{seconds:1752554511 nanos:326742879}"
Jul 15 04:41:51.328334 containerd[2000]: time="2025-07-15T04:41:51.328268319Z" level=info msg="received exit event container_id:\"78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1\" id:\"78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1\" pid:4269 exited_at:{seconds:1752554511 nanos:326742879}"
Jul 15 04:41:51.370731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78319f7348f31ff6519861db4c496be83902f7d2bd47bf0aca2ad6bf689672d1-rootfs.mount: Deactivated successfully.
Jul 15 04:41:51.415502 kubelet[3530]: I0715 04:41:51.415441 3530 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 15 04:41:51.565732 systemd[1]: Created slice kubepods-besteffort-pod7237659f_93f3_4ba9_a648_0803e61989e8.slice - libcontainer container kubepods-besteffort-pod7237659f_93f3_4ba9_a648_0803e61989e8.slice.
Jul 15 04:41:51.610045 systemd[1]: Created slice kubepods-burstable-podc0811bc9_e9ed_4d4f_82dc_a09bf600e91e.slice - libcontainer container kubepods-burstable-podc0811bc9_e9ed_4d4f_82dc_a09bf600e91e.slice.
Jul 15 04:41:51.646949 kubelet[3530]: I0715 04:41:51.646874 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7237659f-93f3-4ba9-a648-0803e61989e8-whisker-backend-key-pair\") pod \"whisker-6554c47cb8-ntsgh\" (UID: \"7237659f-93f3-4ba9-a648-0803e61989e8\") " pod="calico-system/whisker-6554c47cb8-ntsgh"
Jul 15 04:41:51.646949 kubelet[3530]: I0715 04:41:51.646953 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7237659f-93f3-4ba9-a648-0803e61989e8-whisker-ca-bundle\") pod \"whisker-6554c47cb8-ntsgh\" (UID: \"7237659f-93f3-4ba9-a648-0803e61989e8\") " pod="calico-system/whisker-6554c47cb8-ntsgh"
Jul 15 04:41:51.648000 kubelet[3530]: I0715 04:41:51.647010 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57rj2\" (UniqueName: \"kubernetes.io/projected/7237659f-93f3-4ba9-a648-0803e61989e8-kube-api-access-57rj2\") pod \"whisker-6554c47cb8-ntsgh\" (UID: \"7237659f-93f3-4ba9-a648-0803e61989e8\") " pod="calico-system/whisker-6554c47cb8-ntsgh"
Jul 15 04:41:51.678273 systemd[1]: Created slice kubepods-burstable-podbf77bfae_98bc_4de2_a9a7_e16472917425.slice - libcontainer container kubepods-burstable-podbf77bfae_98bc_4de2_a9a7_e16472917425.slice.
Jul 15 04:41:51.747795 kubelet[3530]: I0715 04:41:51.747722 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf77bfae-98bc-4de2-a9a7-e16472917425-config-volume\") pod \"coredns-674b8bbfcf-s98jm\" (UID: \"bf77bfae-98bc-4de2-a9a7-e16472917425\") " pod="kube-system/coredns-674b8bbfcf-s98jm"
Jul 15 04:41:51.748055 kubelet[3530]: I0715 04:41:51.748025 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0811bc9-e9ed-4d4f-82dc-a09bf600e91e-config-volume\") pod \"coredns-674b8bbfcf-8xskh\" (UID: \"c0811bc9-e9ed-4d4f-82dc-a09bf600e91e\") " pod="kube-system/coredns-674b8bbfcf-8xskh"
Jul 15 04:41:51.748329 kubelet[3530]: I0715 04:41:51.748297 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qsg5\" (UniqueName: \"kubernetes.io/projected/bf77bfae-98bc-4de2-a9a7-e16472917425-kube-api-access-4qsg5\") pod \"coredns-674b8bbfcf-s98jm\" (UID: \"bf77bfae-98bc-4de2-a9a7-e16472917425\") " pod="kube-system/coredns-674b8bbfcf-s98jm"
Jul 15 04:41:51.748547 kubelet[3530]: I0715 04:41:51.748518 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmfrz\" (UniqueName: \"kubernetes.io/projected/c0811bc9-e9ed-4d4f-82dc-a09bf600e91e-kube-api-access-hmfrz\") pod \"coredns-674b8bbfcf-8xskh\" (UID: \"c0811bc9-e9ed-4d4f-82dc-a09bf600e91e\") " pod="kube-system/coredns-674b8bbfcf-8xskh"
Jul 15 04:41:51.817723 systemd[1]: Created slice kubepods-besteffort-podb5b59031_a976_4061_a747_bcf288f53e7c.slice - libcontainer container kubepods-besteffort-podb5b59031_a976_4061_a747_bcf288f53e7c.slice.
Jul 15 04:41:51.848955 kubelet[3530]: I0715 04:41:51.848901 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b5b59031-a976-4061-a747-bcf288f53e7c-calico-apiserver-certs\") pod \"calico-apiserver-85ff754f6c-6pz6d\" (UID: \"b5b59031-a976-4061-a747-bcf288f53e7c\") " pod="calico-apiserver/calico-apiserver-85ff754f6c-6pz6d"
Jul 15 04:41:51.851601 kubelet[3530]: I0715 04:41:51.850389 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4bfd\" (UniqueName: \"kubernetes.io/projected/b5b59031-a976-4061-a747-bcf288f53e7c-kube-api-access-b4bfd\") pod \"calico-apiserver-85ff754f6c-6pz6d\" (UID: \"b5b59031-a976-4061-a747-bcf288f53e7c\") " pod="calico-apiserver/calico-apiserver-85ff754f6c-6pz6d"
Jul 15 04:41:51.870639 systemd[1]: Created slice kubepods-besteffort-pod6c8bdd8a_b1aa_4a22_8cf7_cec4e017dc04.slice - libcontainer container kubepods-besteffort-pod6c8bdd8a_b1aa_4a22_8cf7_cec4e017dc04.slice.
Jul 15 04:41:51.891032 containerd[2000]: time="2025-07-15T04:41:51.889913622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6554c47cb8-ntsgh,Uid:7237659f-93f3-4ba9-a648-0803e61989e8,Namespace:calico-system,Attempt:0,}"
Jul 15 04:41:51.941856 containerd[2000]: time="2025-07-15T04:41:51.941796066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8xskh,Uid:c0811bc9-e9ed-4d4f-82dc-a09bf600e91e,Namespace:kube-system,Attempt:0,}"
Jul 15 04:41:51.951906 kubelet[3530]: I0715 04:41:51.951413 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04-goldmane-key-pair\") pod \"goldmane-768f4c5c69-k6tnv\" (UID: \"6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04\") " pod="calico-system/goldmane-768f4c5c69-k6tnv"
Jul 15 04:41:51.951906 kubelet[3530]: I0715 04:41:51.951516 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04-config\") pod \"goldmane-768f4c5c69-k6tnv\" (UID: \"6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04\") " pod="calico-system/goldmane-768f4c5c69-k6tnv"
Jul 15 04:41:51.951906 kubelet[3530]: I0715 04:41:51.951558 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-k6tnv\" (UID: \"6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04\") " pod="calico-system/goldmane-768f4c5c69-k6tnv"
Jul 15 04:41:51.952257 kubelet[3530]: I0715 04:41:51.951978 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85nx\" (UniqueName: \"kubernetes.io/projected/6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04-kube-api-access-b85nx\") pod \"goldmane-768f4c5c69-k6tnv\" (UID: \"6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04\") " pod="calico-system/goldmane-768f4c5c69-k6tnv"
Jul 15 04:41:51.989427 systemd[1]: Created slice kubepods-besteffort-pod715ffe93_1622_49a4_af2b_e1704f489781.slice - libcontainer container kubepods-besteffort-pod715ffe93_1622_49a4_af2b_e1704f489781.slice.
Jul 15 04:41:52.015229 systemd[1]: Created slice kubepods-besteffort-pod0076db00_c9aa_49c4_be93_9c703fd23cc9.slice - libcontainer container kubepods-besteffort-pod0076db00_c9aa_49c4_be93_9c703fd23cc9.slice.
Jul 15 04:41:52.026823 containerd[2000]: time="2025-07-15T04:41:52.026768475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s98jm,Uid:bf77bfae-98bc-4de2-a9a7-e16472917425,Namespace:kube-system,Attempt:0,}"
Jul 15 04:41:52.043846 containerd[2000]: time="2025-07-15T04:41:52.040608267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdwd,Uid:0076db00-c9aa-49c4-be93-9c703fd23cc9,Namespace:calico-system,Attempt:0,}"
Jul 15 04:41:52.046007 systemd[1]: Created slice kubepods-besteffort-podeef457d8_766f_4e1c_ac69_dfbf58c54fe2.slice - libcontainer container kubepods-besteffort-podeef457d8_766f_4e1c_ac69_dfbf58c54fe2.slice.
Jul 15 04:41:52.053222 kubelet[3530]: I0715 04:41:52.053175 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/715ffe93-1622-49a4-af2b-e1704f489781-tigera-ca-bundle\") pod \"calico-kube-controllers-58b5d5d888-64j2b\" (UID: \"715ffe93-1622-49a4-af2b-e1704f489781\") " pod="calico-system/calico-kube-controllers-58b5d5d888-64j2b"
Jul 15 04:41:52.055961 kubelet[3530]: I0715 04:41:52.055722 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztfst\" (UniqueName: \"kubernetes.io/projected/715ffe93-1622-49a4-af2b-e1704f489781-kube-api-access-ztfst\") pod \"calico-kube-controllers-58b5d5d888-64j2b\" (UID: \"715ffe93-1622-49a4-af2b-e1704f489781\") " pod="calico-system/calico-kube-controllers-58b5d5d888-64j2b"
Jul 15 04:41:52.056150 kubelet[3530]: I0715 04:41:52.055856 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eef457d8-766f-4e1c-ac69-dfbf58c54fe2-calico-apiserver-certs\") pod \"calico-apiserver-85ff754f6c-68t68\" (UID: \"eef457d8-766f-4e1c-ac69-dfbf58c54fe2\") " pod="calico-apiserver/calico-apiserver-85ff754f6c-68t68"
Jul 15 04:41:52.056150 kubelet[3530]: I0715 04:41:52.056059 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96b69\" (UniqueName: \"kubernetes.io/projected/eef457d8-766f-4e1c-ac69-dfbf58c54fe2-kube-api-access-96b69\") pod \"calico-apiserver-85ff754f6c-68t68\" (UID: \"eef457d8-766f-4e1c-ac69-dfbf58c54fe2\") " pod="calico-apiserver/calico-apiserver-85ff754f6c-68t68"
Jul 15 04:41:52.135183 containerd[2000]: time="2025-07-15T04:41:52.133999851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ff754f6c-6pz6d,Uid:b5b59031-a976-4061-a747-bcf288f53e7c,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 04:41:52.150730 containerd[2000]: time="2025-07-15T04:41:52.150678820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 15 04:41:52.202206 containerd[2000]: time="2025-07-15T04:41:52.202008856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-k6tnv,Uid:6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04,Namespace:calico-system,Attempt:0,}"
Jul 15 04:41:52.325554 containerd[2000]: time="2025-07-15T04:41:52.325423780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58b5d5d888-64j2b,Uid:715ffe93-1622-49a4-af2b-e1704f489781,Namespace:calico-system,Attempt:0,}"
Jul 15 04:41:52.376151 containerd[2000]: time="2025-07-15T04:41:52.375874157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ff754f6c-68t68,Uid:eef457d8-766f-4e1c-ac69-dfbf58c54fe2,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 04:41:52.536581 containerd[2000]: time="2025-07-15T04:41:52.535943777Z" level=error msg="Failed to destroy network for sandbox \"8e7253aad467a6514b0de598f0153e38d5be9a9d8c469c963b686df7e136baa7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.545764 systemd[1]: run-netns-cni\x2debd28fb0\x2d20ba\x2dbf4c\x2d4b33\x2d62aeb33de5a5.mount: Deactivated successfully.
Jul 15 04:41:52.556003 containerd[2000]: time="2025-07-15T04:41:52.555739518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8xskh,Uid:c0811bc9-e9ed-4d4f-82dc-a09bf600e91e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e7253aad467a6514b0de598f0153e38d5be9a9d8c469c963b686df7e136baa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.556301 kubelet[3530]: E0715 04:41:52.556163 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e7253aad467a6514b0de598f0153e38d5be9a9d8c469c963b686df7e136baa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.556301 kubelet[3530]: E0715 04:41:52.556257 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e7253aad467a6514b0de598f0153e38d5be9a9d8c469c963b686df7e136baa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8xskh"
Jul 15 04:41:52.556301 kubelet[3530]: E0715 04:41:52.556291 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e7253aad467a6514b0de598f0153e38d5be9a9d8c469c963b686df7e136baa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8xskh"
Jul 15 04:41:52.558275 kubelet[3530]: E0715 04:41:52.556377 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8xskh_kube-system(c0811bc9-e9ed-4d4f-82dc-a09bf600e91e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8xskh_kube-system(c0811bc9-e9ed-4d4f-82dc-a09bf600e91e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e7253aad467a6514b0de598f0153e38d5be9a9d8c469c963b686df7e136baa7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8xskh" podUID="c0811bc9-e9ed-4d4f-82dc-a09bf600e91e"
Jul 15 04:41:52.589673 containerd[2000]: time="2025-07-15T04:41:52.589433562Z" level=error msg="Failed to destroy network for sandbox \"c956d8aa62b69f3c498cb5edf66a998a5efa2500a096d4993a9f91573a1a57cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.596978 systemd[1]: run-netns-cni\x2d3af9eb24\x2da5e9\x2dc490\x2ddc41\x2d78965cdaeb8a.mount: Deactivated successfully.
Jul 15 04:41:52.600704 containerd[2000]: time="2025-07-15T04:41:52.599458098Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6554c47cb8-ntsgh,Uid:7237659f-93f3-4ba9-a648-0803e61989e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c956d8aa62b69f3c498cb5edf66a998a5efa2500a096d4993a9f91573a1a57cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.602934 kubelet[3530]: E0715 04:41:52.601277 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c956d8aa62b69f3c498cb5edf66a998a5efa2500a096d4993a9f91573a1a57cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.602934 kubelet[3530]: E0715 04:41:52.601357 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c956d8aa62b69f3c498cb5edf66a998a5efa2500a096d4993a9f91573a1a57cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6554c47cb8-ntsgh"
Jul 15 04:41:52.602934 kubelet[3530]: E0715 04:41:52.601400 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c956d8aa62b69f3c498cb5edf66a998a5efa2500a096d4993a9f91573a1a57cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6554c47cb8-ntsgh"
Jul 15 04:41:52.603325 kubelet[3530]: E0715 04:41:52.601473 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6554c47cb8-ntsgh_calico-system(7237659f-93f3-4ba9-a648-0803e61989e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6554c47cb8-ntsgh_calico-system(7237659f-93f3-4ba9-a648-0803e61989e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c956d8aa62b69f3c498cb5edf66a998a5efa2500a096d4993a9f91573a1a57cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6554c47cb8-ntsgh" podUID="7237659f-93f3-4ba9-a648-0803e61989e8"
Jul 15 04:41:52.617216 containerd[2000]: time="2025-07-15T04:41:52.616949130Z" level=error msg="Failed to destroy network for sandbox \"e6a265097a7687bf36b92233f84cc41460446e0a6e452e08b0e179e71290c44a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.620151 containerd[2000]: time="2025-07-15T04:41:52.617737866Z" level=error msg="Failed to destroy network for sandbox \"67bb261c5c4149312628ab6105ad0f610d7bac69874136dc94b3a396690c249d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.623365 containerd[2000]: time="2025-07-15T04:41:52.623199294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s98jm,Uid:bf77bfae-98bc-4de2-a9a7-e16472917425,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a265097a7687bf36b92233f84cc41460446e0a6e452e08b0e179e71290c44a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.625958 systemd[1]: run-netns-cni\x2db629978f\x2d4c09\x2df9a5\x2d483c\x2d1c2a5b263948.mount: Deactivated successfully.
Jul 15 04:41:52.629676 containerd[2000]: time="2025-07-15T04:41:52.626558310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdwd,Uid:0076db00-c9aa-49c4-be93-9c703fd23cc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67bb261c5c4149312628ab6105ad0f610d7bac69874136dc94b3a396690c249d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.630376 kubelet[3530]: E0715 04:41:52.630309 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a265097a7687bf36b92233f84cc41460446e0a6e452e08b0e179e71290c44a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.630504 kubelet[3530]: E0715 04:41:52.630399 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a265097a7687bf36b92233f84cc41460446e0a6e452e08b0e179e71290c44a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s98jm"
Jul 15 04:41:52.630504 kubelet[3530]: E0715 04:41:52.630437 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a265097a7687bf36b92233f84cc41460446e0a6e452e08b0e179e71290c44a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s98jm"
Jul 15 04:41:52.630637 kubelet[3530]: E0715 04:41:52.630513 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-s98jm_kube-system(bf77bfae-98bc-4de2-a9a7-e16472917425)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-s98jm_kube-system(bf77bfae-98bc-4de2-a9a7-e16472917425)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6a265097a7687bf36b92233f84cc41460446e0a6e452e08b0e179e71290c44a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-s98jm" podUID="bf77bfae-98bc-4de2-a9a7-e16472917425"
Jul 15 04:41:52.631230 kubelet[3530]: E0715 04:41:52.630309 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67bb261c5c4149312628ab6105ad0f610d7bac69874136dc94b3a396690c249d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 04:41:52.631435 kubelet[3530]: E0715 04:41:52.631400 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67bb261c5c4149312628ab6105ad0f610d7bac69874136dc94b3a396690c249d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghdwd" Jul 15 04:41:52.631712 kubelet[3530]: E0715 04:41:52.631557 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67bb261c5c4149312628ab6105ad0f610d7bac69874136dc94b3a396690c249d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghdwd" Jul 15 04:41:52.631712 kubelet[3530]: E0715 04:41:52.631641 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghdwd_calico-system(0076db00-c9aa-49c4-be93-9c703fd23cc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghdwd_calico-system(0076db00-c9aa-49c4-be93-9c703fd23cc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67bb261c5c4149312628ab6105ad0f610d7bac69874136dc94b3a396690c249d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghdwd" podUID="0076db00-c9aa-49c4-be93-9c703fd23cc9" Jul 15 04:41:52.698236 containerd[2000]: time="2025-07-15T04:41:52.698095806Z" level=error msg="Failed to destroy network for sandbox \"550278db3dca8e60a9052249c326977b529503fa35d81a58a115013dfe975198\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.703124 containerd[2000]: time="2025-07-15T04:41:52.702778098Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-85ff754f6c-6pz6d,Uid:b5b59031-a976-4061-a747-bcf288f53e7c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"550278db3dca8e60a9052249c326977b529503fa35d81a58a115013dfe975198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.703786 kubelet[3530]: E0715 04:41:52.703725 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"550278db3dca8e60a9052249c326977b529503fa35d81a58a115013dfe975198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.704254 kubelet[3530]: E0715 04:41:52.703809 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"550278db3dca8e60a9052249c326977b529503fa35d81a58a115013dfe975198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85ff754f6c-6pz6d" Jul 15 04:41:52.704254 kubelet[3530]: E0715 04:41:52.703850 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"550278db3dca8e60a9052249c326977b529503fa35d81a58a115013dfe975198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85ff754f6c-6pz6d" Jul 15 04:41:52.704254 kubelet[3530]: E0715 04:41:52.703939 3530 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85ff754f6c-6pz6d_calico-apiserver(b5b59031-a976-4061-a747-bcf288f53e7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85ff754f6c-6pz6d_calico-apiserver(b5b59031-a976-4061-a747-bcf288f53e7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"550278db3dca8e60a9052249c326977b529503fa35d81a58a115013dfe975198\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85ff754f6c-6pz6d" podUID="b5b59031-a976-4061-a747-bcf288f53e7c" Jul 15 04:41:52.724349 containerd[2000]: time="2025-07-15T04:41:52.724097226Z" level=error msg="Failed to destroy network for sandbox \"5dfa9c641dd1c8216ddf07cb6ea09594803d492d965d4f4ec8df9c0c0d6c38f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.725503 containerd[2000]: time="2025-07-15T04:41:52.725433330Z" level=error msg="Failed to destroy network for sandbox \"c297a2cfa8ceb7c4f41d2236a39bbeab1474824c3b1305530a4a952dcaee2b9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.727798 containerd[2000]: time="2025-07-15T04:41:52.727733370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-k6tnv,Uid:6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa9c641dd1c8216ddf07cb6ea09594803d492d965d4f4ec8df9c0c0d6c38f9\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.728447 kubelet[3530]: E0715 04:41:52.728393 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa9c641dd1c8216ddf07cb6ea09594803d492d965d4f4ec8df9c0c0d6c38f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.728705 kubelet[3530]: E0715 04:41:52.728640 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa9c641dd1c8216ddf07cb6ea09594803d492d965d4f4ec8df9c0c0d6c38f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-k6tnv" Jul 15 04:41:52.728912 kubelet[3530]: E0715 04:41:52.728811 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa9c641dd1c8216ddf07cb6ea09594803d492d965d4f4ec8df9c0c0d6c38f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-k6tnv" Jul 15 04:41:52.730243 containerd[2000]: time="2025-07-15T04:41:52.729730362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58b5d5d888-64j2b,Uid:715ffe93-1622-49a4-af2b-e1704f489781,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c297a2cfa8ceb7c4f41d2236a39bbeab1474824c3b1305530a4a952dcaee2b9d\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.730587 kubelet[3530]: E0715 04:41:52.730478 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-k6tnv_calico-system(6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-k6tnv_calico-system(6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dfa9c641dd1c8216ddf07cb6ea09594803d492d965d4f4ec8df9c0c0d6c38f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-k6tnv" podUID="6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04" Jul 15 04:41:52.731020 kubelet[3530]: E0715 04:41:52.730906 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c297a2cfa8ceb7c4f41d2236a39bbeab1474824c3b1305530a4a952dcaee2b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.731318 kubelet[3530]: E0715 04:41:52.731164 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c297a2cfa8ceb7c4f41d2236a39bbeab1474824c3b1305530a4a952dcaee2b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58b5d5d888-64j2b" Jul 15 04:41:52.731603 kubelet[3530]: E0715 04:41:52.731205 3530 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c297a2cfa8ceb7c4f41d2236a39bbeab1474824c3b1305530a4a952dcaee2b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58b5d5d888-64j2b" Jul 15 04:41:52.731603 kubelet[3530]: E0715 04:41:52.731541 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58b5d5d888-64j2b_calico-system(715ffe93-1622-49a4-af2b-e1704f489781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58b5d5d888-64j2b_calico-system(715ffe93-1622-49a4-af2b-e1704f489781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c297a2cfa8ceb7c4f41d2236a39bbeab1474824c3b1305530a4a952dcaee2b9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58b5d5d888-64j2b" podUID="715ffe93-1622-49a4-af2b-e1704f489781" Jul 15 04:41:52.757463 containerd[2000]: time="2025-07-15T04:41:52.757361899Z" level=error msg="Failed to destroy network for sandbox \"60b4421bc443dd7661d57de76c07ae3c7f96fe66b730048f6d36a72469fa50fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.759800 containerd[2000]: time="2025-07-15T04:41:52.759728731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ff754f6c-68t68,Uid:eef457d8-766f-4e1c-ac69-dfbf58c54fe2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"60b4421bc443dd7661d57de76c07ae3c7f96fe66b730048f6d36a72469fa50fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.760090 kubelet[3530]: E0715 04:41:52.760031 3530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60b4421bc443dd7661d57de76c07ae3c7f96fe66b730048f6d36a72469fa50fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 04:41:52.760223 kubelet[3530]: E0715 04:41:52.760139 3530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60b4421bc443dd7661d57de76c07ae3c7f96fe66b730048f6d36a72469fa50fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85ff754f6c-68t68" Jul 15 04:41:52.760223 kubelet[3530]: E0715 04:41:52.760178 3530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60b4421bc443dd7661d57de76c07ae3c7f96fe66b730048f6d36a72469fa50fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85ff754f6c-68t68" Jul 15 04:41:52.760347 kubelet[3530]: E0715 04:41:52.760254 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85ff754f6c-68t68_calico-apiserver(eef457d8-766f-4e1c-ac69-dfbf58c54fe2)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85ff754f6c-68t68_calico-apiserver(eef457d8-766f-4e1c-ac69-dfbf58c54fe2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60b4421bc443dd7661d57de76c07ae3c7f96fe66b730048f6d36a72469fa50fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85ff754f6c-68t68" podUID="eef457d8-766f-4e1c-ac69-dfbf58c54fe2" Jul 15 04:41:53.369438 systemd[1]: run-netns-cni\x2d9eda61f2\x2d3f22\x2d0e92\x2d51c9\x2d7174a8c3bd64.mount: Deactivated successfully. Jul 15 04:41:53.369644 systemd[1]: run-netns-cni\x2d5c9811f0\x2d4733\x2d1f2f\x2d555e\x2da32ab0bf862a.mount: Deactivated successfully. Jul 15 04:41:53.369773 systemd[1]: run-netns-cni\x2db4be6d46\x2d55c5\x2d5924\x2d4bec\x2d8136cc0dcb03.mount: Deactivated successfully. Jul 15 04:41:53.369899 systemd[1]: run-netns-cni\x2dfb44fbc8\x2df18b\x2d45c3\x2d87b3\x2d306fe103a8a2.mount: Deactivated successfully. Jul 15 04:41:53.370029 systemd[1]: run-netns-cni\x2da11e5762\x2dfd2b\x2d5a1e\x2d6c8e\x2ddf44b1bf1069.mount: Deactivated successfully. Jul 15 04:41:58.587594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953980433.mount: Deactivated successfully. 
Jul 15 04:41:58.656807 containerd[2000]: time="2025-07-15T04:41:58.656610120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:58.659588 containerd[2000]: time="2025-07-15T04:41:58.659481840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 15 04:41:58.662143 containerd[2000]: time="2025-07-15T04:41:58.662063400Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:58.671442 containerd[2000]: time="2025-07-15T04:41:58.671345856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:41:58.674769 containerd[2000]: time="2025-07-15T04:41:58.673826124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.520530344s" Jul 15 04:41:58.674769 containerd[2000]: time="2025-07-15T04:41:58.673907040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 04:41:58.719942 containerd[2000]: time="2025-07-15T04:41:58.719862708Z" level=info msg="CreateContainer within sandbox \"f3a7aa3ddbcf6158b95f50ba2441c248685066a3173541e5478cd8f14fcdd355\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 04:41:58.743133 containerd[2000]: time="2025-07-15T04:41:58.740402832Z" level=info msg="Container 
5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:41:58.751808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3446992701.mount: Deactivated successfully. Jul 15 04:41:58.770070 containerd[2000]: time="2025-07-15T04:41:58.769985424Z" level=info msg="CreateContainer within sandbox \"f3a7aa3ddbcf6158b95f50ba2441c248685066a3173541e5478cd8f14fcdd355\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea\"" Jul 15 04:41:58.771486 containerd[2000]: time="2025-07-15T04:41:58.771380676Z" level=info msg="StartContainer for \"5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea\"" Jul 15 04:41:58.775434 containerd[2000]: time="2025-07-15T04:41:58.775313112Z" level=info msg="connecting to shim 5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea" address="unix:///run/containerd/s/2baf18e559b59acde58e0452a216d234829ebd4ea747ab0f97d7aec92c530bd6" protocol=ttrpc version=3 Jul 15 04:41:58.816494 systemd[1]: Started cri-containerd-5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea.scope - libcontainer container 5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea. Jul 15 04:41:58.908241 containerd[2000]: time="2025-07-15T04:41:58.908013517Z" level=info msg="StartContainer for \"5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea\" returns successfully" Jul 15 04:41:59.179022 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 04:41:59.180369 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 15 04:41:59.211138 kubelet[3530]: I0715 04:41:59.209349 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dwc8z" podStartSLOduration=2.19122561 podStartE2EDuration="19.209326763s" podCreationTimestamp="2025-07-15 04:41:40 +0000 UTC" firstStartedPulling="2025-07-15 04:41:41.659077795 +0000 UTC m=+29.104094221" lastFinishedPulling="2025-07-15 04:41:58.677178936 +0000 UTC m=+46.122195374" observedRunningTime="2025-07-15 04:41:59.209326163 +0000 UTC m=+46.654342637" watchObservedRunningTime="2025-07-15 04:41:59.209326763 +0000 UTC m=+46.654343201" Jul 15 04:41:59.532450 kubelet[3530]: I0715 04:41:59.532296 3530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7237659f-93f3-4ba9-a648-0803e61989e8-whisker-ca-bundle\") pod \"7237659f-93f3-4ba9-a648-0803e61989e8\" (UID: \"7237659f-93f3-4ba9-a648-0803e61989e8\") " Jul 15 04:41:59.532450 kubelet[3530]: I0715 04:41:59.532380 3530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7237659f-93f3-4ba9-a648-0803e61989e8-whisker-backend-key-pair\") pod \"7237659f-93f3-4ba9-a648-0803e61989e8\" (UID: \"7237659f-93f3-4ba9-a648-0803e61989e8\") " Jul 15 04:41:59.532450 kubelet[3530]: I0715 04:41:59.532449 3530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57rj2\" (UniqueName: \"kubernetes.io/projected/7237659f-93f3-4ba9-a648-0803e61989e8-kube-api-access-57rj2\") pod \"7237659f-93f3-4ba9-a648-0803e61989e8\" (UID: \"7237659f-93f3-4ba9-a648-0803e61989e8\") " Jul 15 04:41:59.533964 kubelet[3530]: I0715 04:41:59.533538 3530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7237659f-93f3-4ba9-a648-0803e61989e8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"7237659f-93f3-4ba9-a648-0803e61989e8" (UID: "7237659f-93f3-4ba9-a648-0803e61989e8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 04:41:59.562458 kubelet[3530]: I0715 04:41:59.562288 3530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7237659f-93f3-4ba9-a648-0803e61989e8-kube-api-access-57rj2" (OuterVolumeSpecName: "kube-api-access-57rj2") pod "7237659f-93f3-4ba9-a648-0803e61989e8" (UID: "7237659f-93f3-4ba9-a648-0803e61989e8"). InnerVolumeSpecName "kube-api-access-57rj2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 04:41:59.563406 kubelet[3530]: I0715 04:41:59.563269 3530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7237659f-93f3-4ba9-a648-0803e61989e8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7237659f-93f3-4ba9-a648-0803e61989e8" (UID: "7237659f-93f3-4ba9-a648-0803e61989e8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 04:41:59.597664 systemd[1]: var-lib-kubelet-pods-7237659f\x2d93f3\x2d4ba9\x2da648\x2d0803e61989e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d57rj2.mount: Deactivated successfully. Jul 15 04:41:59.599384 systemd[1]: var-lib-kubelet-pods-7237659f\x2d93f3\x2d4ba9\x2da648\x2d0803e61989e8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 15 04:41:59.633673 kubelet[3530]: I0715 04:41:59.633515 3530 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7237659f-93f3-4ba9-a648-0803e61989e8-whisker-ca-bundle\") on node \"ip-172-31-20-207\" DevicePath \"\"" Jul 15 04:41:59.634134 kubelet[3530]: I0715 04:41:59.633887 3530 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7237659f-93f3-4ba9-a648-0803e61989e8-whisker-backend-key-pair\") on node \"ip-172-31-20-207\" DevicePath \"\"" Jul 15 04:41:59.634134 kubelet[3530]: I0715 04:41:59.633935 3530 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-57rj2\" (UniqueName: \"kubernetes.io/projected/7237659f-93f3-4ba9-a648-0803e61989e8-kube-api-access-57rj2\") on node \"ip-172-31-20-207\" DevicePath \"\"" Jul 15 04:42:00.183443 kubelet[3530]: I0715 04:42:00.183379 3530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:42:00.194631 systemd[1]: Removed slice kubepods-besteffort-pod7237659f_93f3_4ba9_a648_0803e61989e8.slice - libcontainer container kubepods-besteffort-pod7237659f_93f3_4ba9_a648_0803e61989e8.slice. Jul 15 04:42:00.321534 systemd[1]: Created slice kubepods-besteffort-podb80f9c0e_0fa9_4e60_8369_9af91bf005b7.slice - libcontainer container kubepods-besteffort-podb80f9c0e_0fa9_4e60_8369_9af91bf005b7.slice. 
Jul 15 04:42:00.439333 kubelet[3530]: I0715 04:42:00.439152 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b80f9c0e-0fa9-4e60-8369-9af91bf005b7-whisker-backend-key-pair\") pod \"whisker-6dbccc49dd-dj5bx\" (UID: \"b80f9c0e-0fa9-4e60-8369-9af91bf005b7\") " pod="calico-system/whisker-6dbccc49dd-dj5bx" Jul 15 04:42:00.439333 kubelet[3530]: I0715 04:42:00.439245 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b80f9c0e-0fa9-4e60-8369-9af91bf005b7-whisker-ca-bundle\") pod \"whisker-6dbccc49dd-dj5bx\" (UID: \"b80f9c0e-0fa9-4e60-8369-9af91bf005b7\") " pod="calico-system/whisker-6dbccc49dd-dj5bx" Jul 15 04:42:00.439333 kubelet[3530]: I0715 04:42:00.439292 3530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jsz5\" (UniqueName: \"kubernetes.io/projected/b80f9c0e-0fa9-4e60-8369-9af91bf005b7-kube-api-access-5jsz5\") pod \"whisker-6dbccc49dd-dj5bx\" (UID: \"b80f9c0e-0fa9-4e60-8369-9af91bf005b7\") " pod="calico-system/whisker-6dbccc49dd-dj5bx" Jul 15 04:42:00.630142 containerd[2000]: time="2025-07-15T04:42:00.629991578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbccc49dd-dj5bx,Uid:b80f9c0e-0fa9-4e60-8369-9af91bf005b7,Namespace:calico-system,Attempt:0,}" Jul 15 04:42:00.799743 kubelet[3530]: I0715 04:42:00.799028 3530 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7237659f-93f3-4ba9-a648-0803e61989e8" path="/var/lib/kubelet/pods/7237659f-93f3-4ba9-a648-0803e61989e8/volumes" Jul 15 04:42:01.009094 (udev-worker)[4565]: Network interface NamePolicy= disabled on kernel command line. 
Jul 15 04:42:01.017375 systemd-networkd[1831]: cali4c3e28317c0: Link UP Jul 15 04:42:01.018838 systemd-networkd[1831]: cali4c3e28317c0: Gained carrier Jul 15 04:42:01.069215 containerd[2000]: 2025-07-15 04:42:00.679 [INFO][4593] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 04:42:01.069215 containerd[2000]: 2025-07-15 04:42:00.783 [INFO][4593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0 whisker-6dbccc49dd- calico-system b80f9c0e-0fa9-4e60-8369-9af91bf005b7 939 0 2025-07-15 04:42:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6dbccc49dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-207 whisker-6dbccc49dd-dj5bx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4c3e28317c0 [] [] }} ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" Namespace="calico-system" Pod="whisker-6dbccc49dd-dj5bx" WorkloadEndpoint="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-" Jul 15 04:42:01.069215 containerd[2000]: 2025-07-15 04:42:00.783 [INFO][4593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" Namespace="calico-system" Pod="whisker-6dbccc49dd-dj5bx" WorkloadEndpoint="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0" Jul 15 04:42:01.069215 containerd[2000]: 2025-07-15 04:42:00.885 [INFO][4604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" HandleID="k8s-pod-network.226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" Workload="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0" Jul 15 04:42:01.069589 containerd[2000]: 2025-07-15 04:42:00.887 
[INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" HandleID="k8s-pod-network.226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" Workload="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bc120), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-207", "pod":"whisker-6dbccc49dd-dj5bx", "timestamp":"2025-07-15 04:42:00.885521163 +0000 UTC"}, Hostname:"ip-172-31-20-207", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:42:01.069589 containerd[2000]: 2025-07-15 04:42:00.887 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:01.069589 containerd[2000]: 2025-07-15 04:42:00.887 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:42:01.069589 containerd[2000]: 2025-07-15 04:42:00.887 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-207' Jul 15 04:42:01.069589 containerd[2000]: 2025-07-15 04:42:00.912 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" host="ip-172-31-20-207" Jul 15 04:42:01.069589 containerd[2000]: 2025-07-15 04:42:00.926 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-207" Jul 15 04:42:01.069589 containerd[2000]: 2025-07-15 04:42:00.940 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:01.069589 containerd[2000]: 2025-07-15 04:42:00.945 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:01.069589 containerd[2000]: 2025-07-15 04:42:00.951 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:01.070055 containerd[2000]: 2025-07-15 04:42:00.951 [INFO][4604] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.64/26 handle="k8s-pod-network.226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" host="ip-172-31-20-207" Jul 15 04:42:01.070055 containerd[2000]: 2025-07-15 04:42:00.954 [INFO][4604] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4 Jul 15 04:42:01.070055 containerd[2000]: 2025-07-15 04:42:00.970 [INFO][4604] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.64/26 handle="k8s-pod-network.226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" host="ip-172-31-20-207" Jul 15 04:42:01.070055 containerd[2000]: 2025-07-15 04:42:00.982 [INFO][4604] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.65/26] block=192.168.41.64/26 
handle="k8s-pod-network.226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" host="ip-172-31-20-207" Jul 15 04:42:01.070055 containerd[2000]: 2025-07-15 04:42:00.982 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.65/26] handle="k8s-pod-network.226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" host="ip-172-31-20-207" Jul 15 04:42:01.070055 containerd[2000]: 2025-07-15 04:42:00.982 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:01.070055 containerd[2000]: 2025-07-15 04:42:00.982 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.65/26] IPv6=[] ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" HandleID="k8s-pod-network.226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" Workload="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0" Jul 15 04:42:01.078522 containerd[2000]: 2025-07-15 04:42:00.991 [INFO][4593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" Namespace="calico-system" Pod="whisker-6dbccc49dd-dj5bx" WorkloadEndpoint="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0", GenerateName:"whisker-6dbccc49dd-", Namespace:"calico-system", SelfLink:"", UID:"b80f9c0e-0fa9-4e60-8369-9af91bf005b7", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 42, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dbccc49dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"", Pod:"whisker-6dbccc49dd-dj5bx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.41.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4c3e28317c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:01.078522 containerd[2000]: 2025-07-15 04:42:00.991 [INFO][4593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.65/32] ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" Namespace="calico-system" Pod="whisker-6dbccc49dd-dj5bx" WorkloadEndpoint="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0" Jul 15 04:42:01.079245 containerd[2000]: 2025-07-15 04:42:00.991 [INFO][4593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c3e28317c0 ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" Namespace="calico-system" Pod="whisker-6dbccc49dd-dj5bx" WorkloadEndpoint="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0" Jul 15 04:42:01.079245 containerd[2000]: 2025-07-15 04:42:01.018 [INFO][4593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" Namespace="calico-system" Pod="whisker-6dbccc49dd-dj5bx" WorkloadEndpoint="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0" Jul 15 04:42:01.079378 containerd[2000]: 2025-07-15 04:42:01.021 [INFO][4593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" 
Namespace="calico-system" Pod="whisker-6dbccc49dd-dj5bx" WorkloadEndpoint="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0", GenerateName:"whisker-6dbccc49dd-", Namespace:"calico-system", SelfLink:"", UID:"b80f9c0e-0fa9-4e60-8369-9af91bf005b7", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 42, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dbccc49dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4", Pod:"whisker-6dbccc49dd-dj5bx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.41.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4c3e28317c0", MAC:"4a:91:33:9e:5c:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:01.079555 containerd[2000]: 2025-07-15 04:42:01.057 [INFO][4593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" Namespace="calico-system" Pod="whisker-6dbccc49dd-dj5bx" WorkloadEndpoint="ip--172--31--20--207-k8s-whisker--6dbccc49dd--dj5bx-eth0" Jul 15 04:42:01.133473 
containerd[2000]: time="2025-07-15T04:42:01.133347456Z" level=info msg="connecting to shim 226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4" address="unix:///run/containerd/s/03644c169ed99a457134d8f6c1b150b267a48da0eb01b2b5c3a4caef4acadef7" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:42:01.234506 systemd[1]: Started cri-containerd-226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4.scope - libcontainer container 226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4. Jul 15 04:42:01.391759 containerd[2000]: time="2025-07-15T04:42:01.390774589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbccc49dd-dj5bx,Uid:b80f9c0e-0fa9-4e60-8369-9af91bf005b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4\"" Jul 15 04:42:01.397657 containerd[2000]: time="2025-07-15T04:42:01.397515037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 04:42:02.082385 systemd-networkd[1831]: cali4c3e28317c0: Gained IPv6LL Jul 15 04:42:02.226937 (udev-worker)[4567]: Network interface NamePolicy= disabled on kernel command line. 
Jul 15 04:42:02.239589 systemd-networkd[1831]: vxlan.calico: Link UP Jul 15 04:42:02.239614 systemd-networkd[1831]: vxlan.calico: Gained carrier Jul 15 04:42:02.803033 containerd[2000]: time="2025-07-15T04:42:02.802965748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdwd,Uid:0076db00-c9aa-49c4-be93-9c703fd23cc9,Namespace:calico-system,Attempt:0,}" Jul 15 04:42:03.056817 containerd[2000]: time="2025-07-15T04:42:03.056302322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:03.062903 containerd[2000]: time="2025-07-15T04:42:03.062789978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 15 04:42:03.065924 containerd[2000]: time="2025-07-15T04:42:03.065817986Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:03.077146 containerd[2000]: time="2025-07-15T04:42:03.077020070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:03.081441 containerd[2000]: time="2025-07-15T04:42:03.081357902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.683731049s" Jul 15 04:42:03.081441 containerd[2000]: time="2025-07-15T04:42:03.081434966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference 
\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 04:42:03.115997 containerd[2000]: time="2025-07-15T04:42:03.115840874Z" level=info msg="CreateContainer within sandbox \"226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 04:42:03.143734 containerd[2000]: time="2025-07-15T04:42:03.141574046Z" level=info msg="Container d77b8e850c17bbfadc13155455c9128f4e4acf7d6beb82a3e7f3454627ce6265: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:42:03.153948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2521647891.mount: Deactivated successfully. Jul 15 04:42:03.166391 (udev-worker)[4823]: Network interface NamePolicy= disabled on kernel command line. Jul 15 04:42:03.167588 systemd-networkd[1831]: cali75e3c6ba035: Link UP Jul 15 04:42:03.169125 systemd-networkd[1831]: cali75e3c6ba035: Gained carrier Jul 15 04:42:03.188670 containerd[2000]: time="2025-07-15T04:42:03.187536422Z" level=info msg="CreateContainer within sandbox \"226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d77b8e850c17bbfadc13155455c9128f4e4acf7d6beb82a3e7f3454627ce6265\"" Jul 15 04:42:03.190831 containerd[2000]: time="2025-07-15T04:42:03.190778090Z" level=info msg="StartContainer for \"d77b8e850c17bbfadc13155455c9128f4e4acf7d6beb82a3e7f3454627ce6265\"" Jul 15 04:42:03.198806 containerd[2000]: time="2025-07-15T04:42:03.198035066Z" level=info msg="connecting to shim d77b8e850c17bbfadc13155455c9128f4e4acf7d6beb82a3e7f3454627ce6265" address="unix:///run/containerd/s/03644c169ed99a457134d8f6c1b150b267a48da0eb01b2b5c3a4caef4acadef7" protocol=ttrpc version=3 Jul 15 04:42:03.231728 containerd[2000]: 2025-07-15 04:42:02.962 [INFO][4857] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0 
csi-node-driver- calico-system 0076db00-c9aa-49c4-be93-9c703fd23cc9 747 0 2025-07-15 04:41:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-207 csi-node-driver-ghdwd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali75e3c6ba035 [] [] }} ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Namespace="calico-system" Pod="csi-node-driver-ghdwd" WorkloadEndpoint="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-" Jul 15 04:42:03.231728 containerd[2000]: 2025-07-15 04:42:02.962 [INFO][4857] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Namespace="calico-system" Pod="csi-node-driver-ghdwd" WorkloadEndpoint="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0" Jul 15 04:42:03.231728 containerd[2000]: 2025-07-15 04:42:03.026 [INFO][4873] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" HandleID="k8s-pod-network.2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Workload="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0" Jul 15 04:42:03.232024 containerd[2000]: 2025-07-15 04:42:03.026 [INFO][4873] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" HandleID="k8s-pod-network.2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Workload="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103710), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-207", 
"pod":"csi-node-driver-ghdwd", "timestamp":"2025-07-15 04:42:03.02663051 +0000 UTC"}, Hostname:"ip-172-31-20-207", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:42:03.232024 containerd[2000]: 2025-07-15 04:42:03.027 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:03.232024 containerd[2000]: 2025-07-15 04:42:03.027 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:42:03.232024 containerd[2000]: 2025-07-15 04:42:03.027 [INFO][4873] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-207' Jul 15 04:42:03.232024 containerd[2000]: 2025-07-15 04:42:03.042 [INFO][4873] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" host="ip-172-31-20-207" Jul 15 04:42:03.232024 containerd[2000]: 2025-07-15 04:42:03.052 [INFO][4873] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-207" Jul 15 04:42:03.232024 containerd[2000]: 2025-07-15 04:42:03.063 [INFO][4873] ipam/ipam.go 511: Trying affinity for 192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:03.232024 containerd[2000]: 2025-07-15 04:42:03.072 [INFO][4873] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:03.232024 containerd[2000]: 2025-07-15 04:42:03.085 [INFO][4873] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:03.232522 containerd[2000]: 2025-07-15 04:42:03.085 [INFO][4873] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.64/26 handle="k8s-pod-network.2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" host="ip-172-31-20-207" Jul 15 04:42:03.232522 containerd[2000]: 2025-07-15 
04:42:03.098 [INFO][4873] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a Jul 15 04:42:03.232522 containerd[2000]: 2025-07-15 04:42:03.115 [INFO][4873] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.64/26 handle="k8s-pod-network.2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" host="ip-172-31-20-207" Jul 15 04:42:03.232522 containerd[2000]: 2025-07-15 04:42:03.139 [INFO][4873] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.66/26] block=192.168.41.64/26 handle="k8s-pod-network.2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" host="ip-172-31-20-207" Jul 15 04:42:03.232522 containerd[2000]: 2025-07-15 04:42:03.144 [INFO][4873] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.66/26] handle="k8s-pod-network.2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" host="ip-172-31-20-207" Jul 15 04:42:03.232522 containerd[2000]: 2025-07-15 04:42:03.145 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 04:42:03.232522 containerd[2000]: 2025-07-15 04:42:03.146 [INFO][4873] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.66/26] IPv6=[] ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" HandleID="k8s-pod-network.2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Workload="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0" Jul 15 04:42:03.232825 containerd[2000]: 2025-07-15 04:42:03.161 [INFO][4857] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Namespace="calico-system" Pod="csi-node-driver-ghdwd" WorkloadEndpoint="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0076db00-c9aa-49c4-be93-9c703fd23cc9", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"", Pod:"csi-node-driver-ghdwd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75e3c6ba035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:03.234471 containerd[2000]: 2025-07-15 04:42:03.161 [INFO][4857] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.66/32] ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Namespace="calico-system" Pod="csi-node-driver-ghdwd" WorkloadEndpoint="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0" Jul 15 04:42:03.234471 containerd[2000]: 2025-07-15 04:42:03.161 [INFO][4857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75e3c6ba035 ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Namespace="calico-system" Pod="csi-node-driver-ghdwd" WorkloadEndpoint="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0" Jul 15 04:42:03.234471 containerd[2000]: 2025-07-15 04:42:03.169 [INFO][4857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Namespace="calico-system" Pod="csi-node-driver-ghdwd" WorkloadEndpoint="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0" Jul 15 04:42:03.234887 containerd[2000]: 2025-07-15 04:42:03.169 [INFO][4857] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Namespace="calico-system" Pod="csi-node-driver-ghdwd" WorkloadEndpoint="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0076db00-c9aa-49c4-be93-9c703fd23cc9", 
ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a", Pod:"csi-node-driver-ghdwd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75e3c6ba035", MAC:"16:1a:a0:a6:e2:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:03.236850 containerd[2000]: 2025-07-15 04:42:03.198 [INFO][4857] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" Namespace="calico-system" Pod="csi-node-driver-ghdwd" WorkloadEndpoint="ip--172--31--20--207-k8s-csi--node--driver--ghdwd-eth0" Jul 15 04:42:03.257428 kubelet[3530]: I0715 04:42:03.257009 3530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:42:03.274535 systemd[1]: Started cri-containerd-d77b8e850c17bbfadc13155455c9128f4e4acf7d6beb82a3e7f3454627ce6265.scope - libcontainer container d77b8e850c17bbfadc13155455c9128f4e4acf7d6beb82a3e7f3454627ce6265. 
Jul 15 04:42:03.323713 containerd[2000]: time="2025-07-15T04:42:03.323526279Z" level=info msg="connecting to shim 2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a" address="unix:///run/containerd/s/cdf644efc7e4e901ff7cc2b5001752ed06c4b83e8cabf466766d3b2ac69a977a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:42:03.445460 systemd[1]: Started cri-containerd-2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a.scope - libcontainer container 2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a. Jul 15 04:42:03.472016 containerd[2000]: time="2025-07-15T04:42:03.471913336Z" level=info msg="StartContainer for \"d77b8e850c17bbfadc13155455c9128f4e4acf7d6beb82a3e7f3454627ce6265\" returns successfully" Jul 15 04:42:03.480440 containerd[2000]: time="2025-07-15T04:42:03.480266092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 04:42:03.555346 systemd-networkd[1831]: vxlan.calico: Gained IPv6LL Jul 15 04:42:03.577967 containerd[2000]: time="2025-07-15T04:42:03.577444204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghdwd,Uid:0076db00-c9aa-49c4-be93-9c703fd23cc9,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a\"" Jul 15 04:42:03.629523 containerd[2000]: time="2025-07-15T04:42:03.629448893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea\" id:\"9ff24e7a9b2814652c6b3ba73f8f6c045033af05c110a97711192a4dfba26c96\" pid:4938 exited_at:{seconds:1752554523 nanos:628972553}" Jul 15 04:42:03.791361 containerd[2000]: time="2025-07-15T04:42:03.791191541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8xskh,Uid:c0811bc9-e9ed-4d4f-82dc-a09bf600e91e,Namespace:kube-system,Attempt:0,}" Jul 15 04:42:03.791932 containerd[2000]: time="2025-07-15T04:42:03.791792777Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-58b5d5d888-64j2b,Uid:715ffe93-1622-49a4-af2b-e1704f489781,Namespace:calico-system,Attempt:0,}" Jul 15 04:42:03.838875 containerd[2000]: time="2025-07-15T04:42:03.838041498Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea\" id:\"8e2d9525c01f1e62fae15a21521a9afb364ca8eaf3fc7cd2bd56aa8d8e0c0d3c\" pid:5006 exited_at:{seconds:1752554523 nanos:837250302}" Jul 15 04:42:04.177736 systemd-networkd[1831]: cali438d2a84415: Link UP Jul 15 04:42:04.178076 systemd-networkd[1831]: cali438d2a84415: Gained carrier Jul 15 04:42:04.225980 containerd[2000]: 2025-07-15 04:42:03.982 [INFO][5018] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0 coredns-674b8bbfcf- kube-system c0811bc9-e9ed-4d4f-82dc-a09bf600e91e 871 0 2025-07-15 04:41:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-207 coredns-674b8bbfcf-8xskh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali438d2a84415 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xskh" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-" Jul 15 04:42:04.225980 containerd[2000]: 2025-07-15 04:42:03.983 [INFO][5018] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xskh" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0" Jul 15 04:42:04.225980 containerd[2000]: 2025-07-15 04:42:04.063 [INFO][5045] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" HandleID="k8s-pod-network.87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" Workload="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0" Jul 15 04:42:04.226439 containerd[2000]: 2025-07-15 04:42:04.063 [INFO][5045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" HandleID="k8s-pod-network.87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" Workload="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031bb50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-207", "pod":"coredns-674b8bbfcf-8xskh", "timestamp":"2025-07-15 04:42:04.063474687 +0000 UTC"}, Hostname:"ip-172-31-20-207", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:42:04.226439 containerd[2000]: 2025-07-15 04:42:04.063 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:04.226439 containerd[2000]: 2025-07-15 04:42:04.064 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:42:04.226439 containerd[2000]: 2025-07-15 04:42:04.065 [INFO][5045] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-207' Jul 15 04:42:04.226439 containerd[2000]: 2025-07-15 04:42:04.100 [INFO][5045] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" host="ip-172-31-20-207" Jul 15 04:42:04.226439 containerd[2000]: 2025-07-15 04:42:04.111 [INFO][5045] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-207" Jul 15 04:42:04.226439 containerd[2000]: 2025-07-15 04:42:04.122 [INFO][5045] ipam/ipam.go 511: Trying affinity for 192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:04.226439 containerd[2000]: 2025-07-15 04:42:04.125 [INFO][5045] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:04.226439 containerd[2000]: 2025-07-15 04:42:04.131 [INFO][5045] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:04.226919 containerd[2000]: 2025-07-15 04:42:04.131 [INFO][5045] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.64/26 handle="k8s-pod-network.87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" host="ip-172-31-20-207" Jul 15 04:42:04.226919 containerd[2000]: 2025-07-15 04:42:04.136 [INFO][5045] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a Jul 15 04:42:04.226919 containerd[2000]: 2025-07-15 04:42:04.153 [INFO][5045] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.64/26 handle="k8s-pod-network.87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" host="ip-172-31-20-207" Jul 15 04:42:04.226919 containerd[2000]: 2025-07-15 04:42:04.163 [INFO][5045] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.67/26] block=192.168.41.64/26 
handle="k8s-pod-network.87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" host="ip-172-31-20-207" Jul 15 04:42:04.226919 containerd[2000]: 2025-07-15 04:42:04.163 [INFO][5045] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.67/26] handle="k8s-pod-network.87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" host="ip-172-31-20-207" Jul 15 04:42:04.226919 containerd[2000]: 2025-07-15 04:42:04.164 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:04.226919 containerd[2000]: 2025-07-15 04:42:04.164 [INFO][5045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.67/26] IPv6=[] ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" HandleID="k8s-pod-network.87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" Workload="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0" Jul 15 04:42:04.228806 containerd[2000]: 2025-07-15 04:42:04.168 [INFO][5018] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xskh" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0811bc9-e9ed-4d4f-82dc-a09bf600e91e", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"", Pod:"coredns-674b8bbfcf-8xskh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali438d2a84415", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:04.228806 containerd[2000]: 2025-07-15 04:42:04.168 [INFO][5018] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.67/32] ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xskh" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0" Jul 15 04:42:04.228806 containerd[2000]: 2025-07-15 04:42:04.168 [INFO][5018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali438d2a84415 ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xskh" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0" Jul 15 04:42:04.228806 containerd[2000]: 2025-07-15 04:42:04.174 [INFO][5018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-8xskh" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0" Jul 15 04:42:04.228806 containerd[2000]: 2025-07-15 04:42:04.174 [INFO][5018] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xskh" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c0811bc9-e9ed-4d4f-82dc-a09bf600e91e", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a", Pod:"coredns-674b8bbfcf-8xskh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali438d2a84415", MAC:"9a:b7:9e:d3:88:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:04.228806 containerd[2000]: 2025-07-15 04:42:04.209 [INFO][5018] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8xskh" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--8xskh-eth0" Jul 15 04:42:04.315536 containerd[2000]: time="2025-07-15T04:42:04.315471916Z" level=info msg="connecting to shim 87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a" address="unix:///run/containerd/s/caad7221287a10bec516a7305231c3d37bbaa6c35a74d4d976b84d83890e0da9" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:42:04.372584 systemd-networkd[1831]: calid549afa262d: Link UP Jul 15 04:42:04.375832 systemd-networkd[1831]: calid549afa262d: Gained carrier Jul 15 04:42:04.434582 systemd[1]: Started cri-containerd-87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a.scope - libcontainer container 87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a. 
Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:03.975 [INFO][5031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0 calico-kube-controllers-58b5d5d888- calico-system 715ffe93-1622-49a4-af2b-e1704f489781 875 0 2025-07-15 04:41:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58b5d5d888 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-207 calico-kube-controllers-58b5d5d888-64j2b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid549afa262d [] [] }} ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Namespace="calico-system" Pod="calico-kube-controllers-58b5d5d888-64j2b" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:03.976 [INFO][5031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Namespace="calico-system" Pod="calico-kube-controllers-58b5d5d888-64j2b" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.080 [INFO][5043] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" HandleID="k8s-pod-network.1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Workload="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.080 [INFO][5043] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" HandleID="k8s-pod-network.1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Workload="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-207", "pod":"calico-kube-controllers-58b5d5d888-64j2b", "timestamp":"2025-07-15 04:42:04.080470383 +0000 UTC"}, Hostname:"ip-172-31-20-207", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.080 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.164 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.164 [INFO][5043] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-207' Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.212 [INFO][5043] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" host="ip-172-31-20-207" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.243 [INFO][5043] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-207" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.259 [INFO][5043] ipam/ipam.go 511: Trying affinity for 192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.262 [INFO][5043] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.270 [INFO][5043] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.273 [INFO][5043] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.64/26 handle="k8s-pod-network.1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" host="ip-172-31-20-207" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.285 [INFO][5043] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.299 [INFO][5043] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.64/26 handle="k8s-pod-network.1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" host="ip-172-31-20-207" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.324 [INFO][5043] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.68/26] block=192.168.41.64/26 
handle="k8s-pod-network.1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" host="ip-172-31-20-207" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.325 [INFO][5043] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.68/26] handle="k8s-pod-network.1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" host="ip-172-31-20-207" Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.326 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:04.444356 containerd[2000]: 2025-07-15 04:42:04.326 [INFO][5043] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.68/26] IPv6=[] ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" HandleID="k8s-pod-network.1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Workload="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0" Jul 15 04:42:04.446432 containerd[2000]: 2025-07-15 04:42:04.344 [INFO][5031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Namespace="calico-system" Pod="calico-kube-controllers-58b5d5d888-64j2b" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0", GenerateName:"calico-kube-controllers-58b5d5d888-", Namespace:"calico-system", SelfLink:"", UID:"715ffe93-1622-49a4-af2b-e1704f489781", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58b5d5d888", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"", Pod:"calico-kube-controllers-58b5d5d888-64j2b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid549afa262d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:04.446432 containerd[2000]: 2025-07-15 04:42:04.349 [INFO][5031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.68/32] ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Namespace="calico-system" Pod="calico-kube-controllers-58b5d5d888-64j2b" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0" Jul 15 04:42:04.446432 containerd[2000]: 2025-07-15 04:42:04.349 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid549afa262d ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Namespace="calico-system" Pod="calico-kube-controllers-58b5d5d888-64j2b" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0" Jul 15 04:42:04.446432 containerd[2000]: 2025-07-15 04:42:04.389 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Namespace="calico-system" Pod="calico-kube-controllers-58b5d5d888-64j2b" 
WorkloadEndpoint="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0" Jul 15 04:42:04.446432 containerd[2000]: 2025-07-15 04:42:04.394 [INFO][5031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Namespace="calico-system" Pod="calico-kube-controllers-58b5d5d888-64j2b" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0", GenerateName:"calico-kube-controllers-58b5d5d888-", Namespace:"calico-system", SelfLink:"", UID:"715ffe93-1622-49a4-af2b-e1704f489781", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58b5d5d888", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a", Pod:"calico-kube-controllers-58b5d5d888-64j2b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid549afa262d", MAC:"0a:cb:af:a2:cd:46", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:04.446432 containerd[2000]: 2025-07-15 04:42:04.426 [INFO][5031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" Namespace="calico-system" Pod="calico-kube-controllers-58b5d5d888-64j2b" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--kube--controllers--58b5d5d888--64j2b-eth0" Jul 15 04:42:04.522123 containerd[2000]: time="2025-07-15T04:42:04.518553221Z" level=info msg="connecting to shim 1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a" address="unix:///run/containerd/s/192c578f0230b153bab2c2ad2c5cd5810db653e424d3cab184bd448fc936684d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:42:04.597633 systemd[1]: Started cri-containerd-1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a.scope - libcontainer container 1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a. 
Jul 15 04:42:04.613530 containerd[2000]: time="2025-07-15T04:42:04.613478897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8xskh,Uid:c0811bc9-e9ed-4d4f-82dc-a09bf600e91e,Namespace:kube-system,Attempt:0,} returns sandbox id \"87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a\"" Jul 15 04:42:04.624350 containerd[2000]: time="2025-07-15T04:42:04.624294845Z" level=info msg="CreateContainer within sandbox \"87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:42:04.645349 containerd[2000]: time="2025-07-15T04:42:04.645281946Z" level=info msg="Container e171faceb8e7bea3a21244e5e6817e01225a5609f3a0726831a73dbde87abcc8: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:42:04.662175 containerd[2000]: time="2025-07-15T04:42:04.662064606Z" level=info msg="CreateContainer within sandbox \"87696d349d7a1d84bcc3947f0a7d9286a74c8366e32c9134888712480a3ee63a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e171faceb8e7bea3a21244e5e6817e01225a5609f3a0726831a73dbde87abcc8\"" Jul 15 04:42:04.664409 containerd[2000]: time="2025-07-15T04:42:04.664326966Z" level=info msg="StartContainer for \"e171faceb8e7bea3a21244e5e6817e01225a5609f3a0726831a73dbde87abcc8\"" Jul 15 04:42:04.667762 containerd[2000]: time="2025-07-15T04:42:04.667691106Z" level=info msg="connecting to shim e171faceb8e7bea3a21244e5e6817e01225a5609f3a0726831a73dbde87abcc8" address="unix:///run/containerd/s/caad7221287a10bec516a7305231c3d37bbaa6c35a74d4d976b84d83890e0da9" protocol=ttrpc version=3 Jul 15 04:42:04.722504 systemd[1]: Started cri-containerd-e171faceb8e7bea3a21244e5e6817e01225a5609f3a0726831a73dbde87abcc8.scope - libcontainer container e171faceb8e7bea3a21244e5e6817e01225a5609f3a0726831a73dbde87abcc8. 
Jul 15 04:42:04.784506 containerd[2000]: time="2025-07-15T04:42:04.783964494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58b5d5d888-64j2b,Uid:715ffe93-1622-49a4-af2b-e1704f489781,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a\"" Jul 15 04:42:04.794734 containerd[2000]: time="2025-07-15T04:42:04.794386362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s98jm,Uid:bf77bfae-98bc-4de2-a9a7-e16472917425,Namespace:kube-system,Attempt:0,}" Jul 15 04:42:04.887342 containerd[2000]: time="2025-07-15T04:42:04.887227963Z" level=info msg="StartContainer for \"e171faceb8e7bea3a21244e5e6817e01225a5609f3a0726831a73dbde87abcc8\" returns successfully" Jul 15 04:42:04.962786 systemd-networkd[1831]: cali75e3c6ba035: Gained IPv6LL Jul 15 04:42:05.304367 systemd-networkd[1831]: cali5a1c96c223e: Link UP Jul 15 04:42:05.307544 systemd-networkd[1831]: cali5a1c96c223e: Gained carrier Jul 15 04:42:05.317962 kubelet[3530]: I0715 04:42:05.317512 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8xskh" podStartSLOduration=47.317484245 podStartE2EDuration="47.317484245s" podCreationTimestamp="2025-07-15 04:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:42:05.310072829 +0000 UTC m=+52.755089291" watchObservedRunningTime="2025-07-15 04:42:05.317484245 +0000 UTC m=+52.762500695" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:04.955 [INFO][5182] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0 coredns-674b8bbfcf- kube-system bf77bfae-98bc-4de2-a9a7-e16472917425 872 0 2025-07-15 04:41:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-207 coredns-674b8bbfcf-s98jm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5a1c96c223e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Namespace="kube-system" Pod="coredns-674b8bbfcf-s98jm" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:04.956 [INFO][5182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Namespace="kube-system" Pod="coredns-674b8bbfcf-s98jm" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.106 [INFO][5205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" HandleID="k8s-pod-network.233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Workload="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.106 [INFO][5205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" HandleID="k8s-pod-network.233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Workload="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034fb90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-207", "pod":"coredns-674b8bbfcf-s98jm", "timestamp":"2025-07-15 04:42:05.106391116 +0000 UTC"}, Hostname:"ip-172-31-20-207", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.107 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.107 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.107 [INFO][5205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-207' Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.152 [INFO][5205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" host="ip-172-31-20-207" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.169 [INFO][5205] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-207" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.190 [INFO][5205] ipam/ipam.go 511: Trying affinity for 192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.199 [INFO][5205] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.208 [INFO][5205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.209 [INFO][5205] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.64/26 handle="k8s-pod-network.233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" host="ip-172-31-20-207" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.212 [INFO][5205] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92 Jul 15 04:42:05.387865 
containerd[2000]: 2025-07-15 04:42:05.222 [INFO][5205] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.64/26 handle="k8s-pod-network.233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" host="ip-172-31-20-207" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.244 [INFO][5205] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.69/26] block=192.168.41.64/26 handle="k8s-pod-network.233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" host="ip-172-31-20-207" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.244 [INFO][5205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.69/26] handle="k8s-pod-network.233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" host="ip-172-31-20-207" Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.244 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:05.387865 containerd[2000]: 2025-07-15 04:42:05.244 [INFO][5205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.69/26] IPv6=[] ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" HandleID="k8s-pod-network.233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Workload="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0" Jul 15 04:42:05.390951 containerd[2000]: 2025-07-15 04:42:05.259 [INFO][5182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Namespace="kube-system" Pod="coredns-674b8bbfcf-s98jm" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf77bfae-98bc-4de2-a9a7-e16472917425", 
ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"", Pod:"coredns-674b8bbfcf-s98jm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a1c96c223e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:05.390951 containerd[2000]: 2025-07-15 04:42:05.259 [INFO][5182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.69/32] ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Namespace="kube-system" Pod="coredns-674b8bbfcf-s98jm" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0" Jul 15 04:42:05.390951 containerd[2000]: 2025-07-15 04:42:05.259 [INFO][5182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a1c96c223e 
ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Namespace="kube-system" Pod="coredns-674b8bbfcf-s98jm" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0" Jul 15 04:42:05.390951 containerd[2000]: 2025-07-15 04:42:05.317 [INFO][5182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Namespace="kube-system" Pod="coredns-674b8bbfcf-s98jm" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0" Jul 15 04:42:05.390951 containerd[2000]: 2025-07-15 04:42:05.325 [INFO][5182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Namespace="kube-system" Pod="coredns-674b8bbfcf-s98jm" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf77bfae-98bc-4de2-a9a7-e16472917425", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92", Pod:"coredns-674b8bbfcf-s98jm", 
Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a1c96c223e", MAC:"b6:47:ac:51:79:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:05.390951 containerd[2000]: 2025-07-15 04:42:05.365 [INFO][5182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" Namespace="kube-system" Pod="coredns-674b8bbfcf-s98jm" WorkloadEndpoint="ip--172--31--20--207-k8s-coredns--674b8bbfcf--s98jm-eth0" Jul 15 04:42:05.509790 containerd[2000]: time="2025-07-15T04:42:05.509662806Z" level=info msg="connecting to shim 233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92" address="unix:///run/containerd/s/4b39197ac65e6d6e64da308b0ef7334046f3bb212530b8060bf72d8eef161685" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:42:05.538368 systemd-networkd[1831]: cali438d2a84415: Gained IPv6LL Jul 15 04:42:05.687555 systemd[1]: Started cri-containerd-233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92.scope - libcontainer container 233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92. 
Jul 15 04:42:05.792992 containerd[2000]: time="2025-07-15T04:42:05.792910543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-k6tnv,Uid:6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04,Namespace:calico-system,Attempt:0,}" Jul 15 04:42:05.796163 systemd-networkd[1831]: calid549afa262d: Gained IPv6LL Jul 15 04:42:05.884942 containerd[2000]: time="2025-07-15T04:42:05.884852852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s98jm,Uid:bf77bfae-98bc-4de2-a9a7-e16472917425,Namespace:kube-system,Attempt:0,} returns sandbox id \"233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92\"" Jul 15 04:42:05.900025 containerd[2000]: time="2025-07-15T04:42:05.899663600Z" level=info msg="CreateContainer within sandbox \"233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:42:05.940077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341506128.mount: Deactivated successfully. Jul 15 04:42:05.947048 containerd[2000]: time="2025-07-15T04:42:05.946367036Z" level=info msg="Container dcc3a2288a7966d1f3c992d8c6b4c33e2f1bbcfb853d7c0b2c14f126f79dd86a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:42:05.959775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824397565.mount: Deactivated successfully. 
Jul 15 04:42:05.983774 containerd[2000]: time="2025-07-15T04:42:05.982273400Z" level=info msg="CreateContainer within sandbox \"233a71811b06c429af29864152fb5b7b960e85b502542120ecc6a1d2f6fabc92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcc3a2288a7966d1f3c992d8c6b4c33e2f1bbcfb853d7c0b2c14f126f79dd86a\"" Jul 15 04:42:05.984839 containerd[2000]: time="2025-07-15T04:42:05.984776396Z" level=info msg="StartContainer for \"dcc3a2288a7966d1f3c992d8c6b4c33e2f1bbcfb853d7c0b2c14f126f79dd86a\"" Jul 15 04:42:05.992231 containerd[2000]: time="2025-07-15T04:42:05.991803260Z" level=info msg="connecting to shim dcc3a2288a7966d1f3c992d8c6b4c33e2f1bbcfb853d7c0b2c14f126f79dd86a" address="unix:///run/containerd/s/4b39197ac65e6d6e64da308b0ef7334046f3bb212530b8060bf72d8eef161685" protocol=ttrpc version=3 Jul 15 04:42:06.076524 systemd[1]: Started cri-containerd-dcc3a2288a7966d1f3c992d8c6b4c33e2f1bbcfb853d7c0b2c14f126f79dd86a.scope - libcontainer container dcc3a2288a7966d1f3c992d8c6b4c33e2f1bbcfb853d7c0b2c14f126f79dd86a. 
Jul 15 04:42:06.240242 containerd[2000]: time="2025-07-15T04:42:06.239735129Z" level=info msg="StartContainer for \"dcc3a2288a7966d1f3c992d8c6b4c33e2f1bbcfb853d7c0b2c14f126f79dd86a\" returns successfully" Jul 15 04:42:06.322601 systemd-networkd[1831]: cali62867600839: Link UP Jul 15 04:42:06.329157 systemd-networkd[1831]: cali62867600839: Gained carrier Jul 15 04:42:06.407252 kubelet[3530]: I0715 04:42:06.406741 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s98jm" podStartSLOduration=48.406715742 podStartE2EDuration="48.406715742s" podCreationTimestamp="2025-07-15 04:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:42:06.364805574 +0000 UTC m=+53.809822024" watchObservedRunningTime="2025-07-15 04:42:06.406715742 +0000 UTC m=+53.851732192" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:05.975 [INFO][5268] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0 goldmane-768f4c5c69- calico-system 6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04 874 0 2025-07-15 04:41:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-207 goldmane-768f4c5c69-k6tnv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali62867600839 [] [] }} ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Namespace="calico-system" Pod="goldmane-768f4c5c69-k6tnv" WorkloadEndpoint="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:05.975 [INFO][5268] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Namespace="calico-system" Pod="goldmane-768f4c5c69-k6tnv" WorkloadEndpoint="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.103 [INFO][5291] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" HandleID="k8s-pod-network.976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Workload="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.106 [INFO][5291] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" HandleID="k8s-pod-network.976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Workload="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-207", "pod":"goldmane-768f4c5c69-k6tnv", "timestamp":"2025-07-15 04:42:06.103758089 +0000 UTC"}, Hostname:"ip-172-31-20-207", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.107 [INFO][5291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.109 [INFO][5291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.109 [INFO][5291] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-207' Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.163 [INFO][5291] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" host="ip-172-31-20-207" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.188 [INFO][5291] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-207" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.203 [INFO][5291] ipam/ipam.go 511: Trying affinity for 192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.213 [INFO][5291] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.226 [INFO][5291] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.226 [INFO][5291] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.64/26 handle="k8s-pod-network.976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" host="ip-172-31-20-207" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.232 [INFO][5291] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95 Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.247 [INFO][5291] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.64/26 handle="k8s-pod-network.976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" host="ip-172-31-20-207" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.277 [INFO][5291] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.70/26] block=192.168.41.64/26 
handle="k8s-pod-network.976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" host="ip-172-31-20-207" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.281 [INFO][5291] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.70/26] handle="k8s-pod-network.976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" host="ip-172-31-20-207" Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.282 [INFO][5291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:06.426465 containerd[2000]: 2025-07-15 04:42:06.283 [INFO][5291] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.70/26] IPv6=[] ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" HandleID="k8s-pod-network.976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Workload="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0" Jul 15 04:42:06.430406 containerd[2000]: 2025-07-15 04:42:06.295 [INFO][5268] cni-plugin/k8s.go 418: Populated endpoint ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Namespace="calico-system" Pod="goldmane-768f4c5c69-k6tnv" WorkloadEndpoint="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"", Pod:"goldmane-768f4c5c69-k6tnv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.41.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62867600839", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:06.430406 containerd[2000]: 2025-07-15 04:42:06.295 [INFO][5268] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.70/32] ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Namespace="calico-system" Pod="goldmane-768f4c5c69-k6tnv" WorkloadEndpoint="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0" Jul 15 04:42:06.430406 containerd[2000]: 2025-07-15 04:42:06.298 [INFO][5268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62867600839 ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Namespace="calico-system" Pod="goldmane-768f4c5c69-k6tnv" WorkloadEndpoint="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0" Jul 15 04:42:06.430406 containerd[2000]: 2025-07-15 04:42:06.334 [INFO][5268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Namespace="calico-system" Pod="goldmane-768f4c5c69-k6tnv" WorkloadEndpoint="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0" Jul 15 04:42:06.430406 containerd[2000]: 2025-07-15 04:42:06.342 [INFO][5268] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Namespace="calico-system" Pod="goldmane-768f4c5c69-k6tnv" WorkloadEndpoint="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95", Pod:"goldmane-768f4c5c69-k6tnv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.41.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62867600839", MAC:"42:b1:04:a0:fb:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:06.430406 containerd[2000]: 2025-07-15 04:42:06.405 [INFO][5268] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" Namespace="calico-system" Pod="goldmane-768f4c5c69-k6tnv" 
WorkloadEndpoint="ip--172--31--20--207-k8s-goldmane--768f4c5c69--k6tnv-eth0" Jul 15 04:42:06.545863 containerd[2000]: time="2025-07-15T04:42:06.545592331Z" level=info msg="connecting to shim 976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95" address="unix:///run/containerd/s/518cddd89659c035513ea1ef79ca8ff3f1f8ccdca2b8acd43bb1aef850223207" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:42:06.733924 systemd[1]: Started cri-containerd-976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95.scope - libcontainer container 976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95. Jul 15 04:42:06.803149 containerd[2000]: time="2025-07-15T04:42:06.803001452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ff754f6c-68t68,Uid:eef457d8-766f-4e1c-ac69-dfbf58c54fe2,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:42:06.818758 systemd-networkd[1831]: cali5a1c96c223e: Gained IPv6LL Jul 15 04:42:07.461545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1529354057.mount: Deactivated successfully. 
Jul 15 04:42:07.515139 containerd[2000]: time="2025-07-15T04:42:07.514977164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:07.519297 containerd[2000]: time="2025-07-15T04:42:07.519200264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 04:42:07.523859 containerd[2000]: time="2025-07-15T04:42:07.523592144Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:07.531401 containerd[2000]: time="2025-07-15T04:42:07.531319100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:07.539002 containerd[2000]: time="2025-07-15T04:42:07.538714160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.058376032s" Jul 15 04:42:07.539002 containerd[2000]: time="2025-07-15T04:42:07.538778048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 04:42:07.544022 containerd[2000]: time="2025-07-15T04:42:07.543467096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 04:42:07.553965 containerd[2000]: time="2025-07-15T04:42:07.553888880Z" level=info msg="CreateContainer within sandbox 
\"226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 04:42:07.579221 containerd[2000]: time="2025-07-15T04:42:07.578660300Z" level=info msg="Container 966dc1ed25528e24714c8035c8ad7b2ae52cdddb21b223b2256582f91e12754c: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:42:07.603019 containerd[2000]: time="2025-07-15T04:42:07.602937524Z" level=info msg="CreateContainer within sandbox \"226155233c8832fa6b15d7732bcebf40487d86030501cd23e833380966671cf4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"966dc1ed25528e24714c8035c8ad7b2ae52cdddb21b223b2256582f91e12754c\"" Jul 15 04:42:07.607700 containerd[2000]: time="2025-07-15T04:42:07.607411316Z" level=info msg="StartContainer for \"966dc1ed25528e24714c8035c8ad7b2ae52cdddb21b223b2256582f91e12754c\"" Jul 15 04:42:07.614307 containerd[2000]: time="2025-07-15T04:42:07.614162048Z" level=info msg="connecting to shim 966dc1ed25528e24714c8035c8ad7b2ae52cdddb21b223b2256582f91e12754c" address="unix:///run/containerd/s/03644c169ed99a457134d8f6c1b150b267a48da0eb01b2b5c3a4caef4acadef7" protocol=ttrpc version=3 Jul 15 04:42:07.674529 systemd[1]: Started cri-containerd-966dc1ed25528e24714c8035c8ad7b2ae52cdddb21b223b2256582f91e12754c.scope - libcontainer container 966dc1ed25528e24714c8035c8ad7b2ae52cdddb21b223b2256582f91e12754c. 
Jul 15 04:42:07.743404 containerd[2000]: time="2025-07-15T04:42:07.742728897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-k6tnv,Uid:6c8bdd8a-b1aa-4a22-8cf7-cec4e017dc04,Namespace:calico-system,Attempt:0,} returns sandbox id \"976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95\"" Jul 15 04:42:07.793489 containerd[2000]: time="2025-07-15T04:42:07.793426797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ff754f6c-6pz6d,Uid:b5b59031-a976-4061-a747-bcf288f53e7c,Namespace:calico-apiserver,Attempt:0,}" Jul 15 04:42:07.875014 systemd-networkd[1831]: cali160fd6f1895: Link UP Jul 15 04:42:07.883887 systemd-networkd[1831]: cali160fd6f1895: Gained carrier Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.159 [INFO][5376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0 calico-apiserver-85ff754f6c- calico-apiserver eef457d8-766f-4e1c-ac69-dfbf58c54fe2 876 0 2025-07-15 04:41:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85ff754f6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-207 calico-apiserver-85ff754f6c-68t68 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali160fd6f1895 [] [] }} ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-68t68" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.162 [INFO][5376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" 
Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-68t68" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.416 [INFO][5395] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" HandleID="k8s-pod-network.28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" Workload="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.417 [INFO][5395] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" HandleID="k8s-pod-network.28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" Workload="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-207", "pod":"calico-apiserver-85ff754f6c-68t68", "timestamp":"2025-07-15 04:42:07.416331871 +0000 UTC"}, Hostname:"ip-172-31-20-207", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.417 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.417 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.417 [INFO][5395] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-207' Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.583 [INFO][5395] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" host="ip-172-31-20-207" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.661 [INFO][5395] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-207" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.759 [INFO][5395] ipam/ipam.go 511: Trying affinity for 192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.767 [INFO][5395] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.773 [INFO][5395] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.774 [INFO][5395] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.64/26 handle="k8s-pod-network.28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" host="ip-172-31-20-207" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.777 [INFO][5395] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.786 [INFO][5395] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.64/26 handle="k8s-pod-network.28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" host="ip-172-31-20-207" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.813 [INFO][5395] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.71/26] block=192.168.41.64/26 
handle="k8s-pod-network.28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" host="ip-172-31-20-207" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.813 [INFO][5395] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.71/26] handle="k8s-pod-network.28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" host="ip-172-31-20-207" Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.813 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:07.958443 containerd[2000]: 2025-07-15 04:42:07.813 [INFO][5395] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.71/26] IPv6=[] ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" HandleID="k8s-pod-network.28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" Workload="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0" Jul 15 04:42:07.960253 containerd[2000]: 2025-07-15 04:42:07.845 [INFO][5376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-68t68" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0", GenerateName:"calico-apiserver-85ff754f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eef457d8-766f-4e1c-ac69-dfbf58c54fe2", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ff754f6c", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"", Pod:"calico-apiserver-85ff754f6c-68t68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali160fd6f1895", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:07.960253 containerd[2000]: 2025-07-15 04:42:07.848 [INFO][5376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.71/32] ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-68t68" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0" Jul 15 04:42:07.960253 containerd[2000]: 2025-07-15 04:42:07.848 [INFO][5376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali160fd6f1895 ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-68t68" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0" Jul 15 04:42:07.960253 containerd[2000]: 2025-07-15 04:42:07.896 [INFO][5376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-68t68" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0" Jul 15 
04:42:07.960253 containerd[2000]: 2025-07-15 04:42:07.899 [INFO][5376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-68t68" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0", GenerateName:"calico-apiserver-85ff754f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eef457d8-766f-4e1c-ac69-dfbf58c54fe2", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ff754f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc", Pod:"calico-apiserver-85ff754f6c-68t68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali160fd6f1895", MAC:"ee:a8:f5:71:58:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 
04:42:07.960253 containerd[2000]: 2025-07-15 04:42:07.951 [INFO][5376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-68t68" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--68t68-eth0" Jul 15 04:42:08.048900 containerd[2000]: time="2025-07-15T04:42:08.048663042Z" level=info msg="connecting to shim 28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc" address="unix:///run/containerd/s/2c2fc92afb839736d3ecdf4b291e46f7a4dfbb9a465a7565d046f42dedc752b3" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:42:08.163402 systemd-networkd[1831]: cali62867600839: Gained IPv6LL Jul 15 04:42:08.228635 systemd[1]: Started cri-containerd-28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc.scope - libcontainer container 28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc. Jul 15 04:42:08.375380 containerd[2000]: time="2025-07-15T04:42:08.375283424Z" level=info msg="StartContainer for \"966dc1ed25528e24714c8035c8ad7b2ae52cdddb21b223b2256582f91e12754c\" returns successfully" Jul 15 04:42:08.392062 systemd-networkd[1831]: calib824976aece: Link UP Jul 15 04:42:08.393809 systemd-networkd[1831]: calib824976aece: Gained carrier Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:07.971 [INFO][5436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0 calico-apiserver-85ff754f6c- calico-apiserver b5b59031-a976-4061-a747-bcf288f53e7c 873 0 2025-07-15 04:41:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85ff754f6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-207 
calico-apiserver-85ff754f6c-6pz6d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib824976aece [] [] }} ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-6pz6d" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:07.971 [INFO][5436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-6pz6d" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.146 [INFO][5459] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" HandleID="k8s-pod-network.cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Workload="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.148 [INFO][5459] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" HandleID="k8s-pod-network.cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Workload="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-207", "pod":"calico-apiserver-85ff754f6c-6pz6d", "timestamp":"2025-07-15 04:42:08.146458891 +0000 UTC"}, Hostname:"ip-172-31-20-207", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.148 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.148 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.148 [INFO][5459] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-207' Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.181 [INFO][5459] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" host="ip-172-31-20-207" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.213 [INFO][5459] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-207" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.238 [INFO][5459] ipam/ipam.go 511: Trying affinity for 192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.280 [INFO][5459] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.291 [INFO][5459] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.64/26 host="ip-172-31-20-207" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.293 [INFO][5459] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.64/26 handle="k8s-pod-network.cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" host="ip-172-31-20-207" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.298 [INFO][5459] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3 Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.336 [INFO][5459] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.41.64/26 handle="k8s-pod-network.cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" host="ip-172-31-20-207" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.363 [INFO][5459] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.72/26] block=192.168.41.64/26 handle="k8s-pod-network.cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" host="ip-172-31-20-207" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.364 [INFO][5459] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.72/26] handle="k8s-pod-network.cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" host="ip-172-31-20-207" Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.364 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 04:42:08.438752 containerd[2000]: 2025-07-15 04:42:08.365 [INFO][5459] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.72/26] IPv6=[] ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" HandleID="k8s-pod-network.cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Workload="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0" Jul 15 04:42:08.442551 containerd[2000]: 2025-07-15 04:42:08.375 [INFO][5436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-6pz6d" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0", GenerateName:"calico-apiserver-85ff754f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5b59031-a976-4061-a747-bcf288f53e7c", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, 
time.July, 15, 4, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ff754f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"", Pod:"calico-apiserver-85ff754f6c-6pz6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib824976aece", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:08.442551 containerd[2000]: 2025-07-15 04:42:08.376 [INFO][5436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.72/32] ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-6pz6d" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0" Jul 15 04:42:08.442551 containerd[2000]: 2025-07-15 04:42:08.376 [INFO][5436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib824976aece ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-6pz6d" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0" Jul 15 04:42:08.442551 containerd[2000]: 2025-07-15 04:42:08.394 [INFO][5436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-6pz6d" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0" Jul 15 04:42:08.442551 containerd[2000]: 2025-07-15 04:42:08.397 [INFO][5436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-6pz6d" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0", GenerateName:"calico-apiserver-85ff754f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5b59031-a976-4061-a747-bcf288f53e7c", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 4, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85ff754f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-207", ContainerID:"cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3", Pod:"calico-apiserver-85ff754f6c-6pz6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib824976aece", MAC:"fa:17:bb:58:f8:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 04:42:08.442551 containerd[2000]: 2025-07-15 04:42:08.431 [INFO][5436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" Namespace="calico-apiserver" Pod="calico-apiserver-85ff754f6c-6pz6d" WorkloadEndpoint="ip--172--31--20--207-k8s-calico--apiserver--85ff754f6c--6pz6d-eth0" Jul 15 04:42:08.593043 containerd[2000]: time="2025-07-15T04:42:08.592893681Z" level=info msg="connecting to shim cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3" address="unix:///run/containerd/s/cfd4c17446c0fef1e0179f32b2881d06fd52145e457a0c1fd5de2678930aa93c" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:42:08.674437 systemd[1]: Started cri-containerd-cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3.scope - libcontainer container cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3. 
Jul 15 04:42:09.003440 containerd[2000]: time="2025-07-15T04:42:09.002990635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ff754f6c-68t68,Uid:eef457d8-766f-4e1c-ac69-dfbf58c54fe2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc\"" Jul 15 04:42:09.027310 containerd[2000]: time="2025-07-15T04:42:09.027215875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85ff754f6c-6pz6d,Uid:b5b59031-a976-4061-a747-bcf288f53e7c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3\"" Jul 15 04:42:09.649400 containerd[2000]: time="2025-07-15T04:42:09.648151486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:09.654299 containerd[2000]: time="2025-07-15T04:42:09.654184762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 15 04:42:09.656694 containerd[2000]: time="2025-07-15T04:42:09.656398882Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:09.673540 containerd[2000]: time="2025-07-15T04:42:09.673021343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.128678523s" Jul 15 04:42:09.678499 containerd[2000]: time="2025-07-15T04:42:09.673089191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference 
\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 15 04:42:09.678499 containerd[2000]: time="2025-07-15T04:42:09.673952363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:09.694134 containerd[2000]: time="2025-07-15T04:42:09.691016591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 04:42:09.705654 containerd[2000]: time="2025-07-15T04:42:09.705586247Z" level=info msg="CreateContainer within sandbox \"2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 04:42:09.750673 containerd[2000]: time="2025-07-15T04:42:09.750620855Z" level=info msg="Container dbc5fb916c6e2fd373176b157d4270385a332aa49ccc062d5bd84840df884e8a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:42:09.779670 containerd[2000]: time="2025-07-15T04:42:09.779613083Z" level=info msg="CreateContainer within sandbox \"2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dbc5fb916c6e2fd373176b157d4270385a332aa49ccc062d5bd84840df884e8a\"" Jul 15 04:42:09.783335 containerd[2000]: time="2025-07-15T04:42:09.782375771Z" level=info msg="StartContainer for \"dbc5fb916c6e2fd373176b157d4270385a332aa49ccc062d5bd84840df884e8a\"" Jul 15 04:42:09.787179 containerd[2000]: time="2025-07-15T04:42:09.787013255Z" level=info msg="connecting to shim dbc5fb916c6e2fd373176b157d4270385a332aa49ccc062d5bd84840df884e8a" address="unix:///run/containerd/s/cdf644efc7e4e901ff7cc2b5001752ed06c4b83e8cabf466766d3b2ac69a977a" protocol=ttrpc version=3 Jul 15 04:42:09.858419 systemd[1]: Started cri-containerd-dbc5fb916c6e2fd373176b157d4270385a332aa49ccc062d5bd84840df884e8a.scope - libcontainer container 
dbc5fb916c6e2fd373176b157d4270385a332aa49ccc062d5bd84840df884e8a. Jul 15 04:42:09.890608 systemd-networkd[1831]: cali160fd6f1895: Gained IPv6LL Jul 15 04:42:10.070314 containerd[2000]: time="2025-07-15T04:42:10.070240893Z" level=info msg="StartContainer for \"dbc5fb916c6e2fd373176b157d4270385a332aa49ccc062d5bd84840df884e8a\" returns successfully" Jul 15 04:42:10.274745 systemd-networkd[1831]: calib824976aece: Gained IPv6LL Jul 15 04:42:12.590368 ntpd[1968]: Listen normally on 7 vxlan.calico 192.168.41.64:123 Jul 15 04:42:12.591504 ntpd[1968]: 15 Jul 04:42:12 ntpd[1968]: Listen normally on 7 vxlan.calico 192.168.41.64:123 Jul 15 04:42:12.591504 ntpd[1968]: 15 Jul 04:42:12 ntpd[1968]: Listen normally on 8 cali4c3e28317c0 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 04:42:12.591504 ntpd[1968]: 15 Jul 04:42:12 ntpd[1968]: Listen normally on 9 vxlan.calico [fe80::649e:c1ff:fe3a:6a42%5]:123 Jul 15 04:42:12.591504 ntpd[1968]: 15 Jul 04:42:12 ntpd[1968]: Listen normally on 10 cali75e3c6ba035 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 04:42:12.591504 ntpd[1968]: 15 Jul 04:42:12 ntpd[1968]: Listen normally on 11 cali438d2a84415 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 04:42:12.591504 ntpd[1968]: 15 Jul 04:42:12 ntpd[1968]: Listen normally on 12 calid549afa262d [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 04:42:12.590503 ntpd[1968]: Listen normally on 8 cali4c3e28317c0 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 04:42:12.590582 ntpd[1968]: Listen normally on 9 vxlan.calico [fe80::649e:c1ff:fe3a:6a42%5]:123 Jul 15 04:42:12.590651 ntpd[1968]: Listen normally on 10 cali75e3c6ba035 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 04:42:12.590715 ntpd[1968]: Listen normally on 11 cali438d2a84415 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 04:42:12.590779 ntpd[1968]: Listen normally on 12 calid549afa262d [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 04:42:12.592259 ntpd[1968]: Listen normally on 13 cali5a1c96c223e [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 04:42:12.593951 ntpd[1968]: 15 Jul 04:42:12 
ntpd[1968]: Listen normally on 13 cali5a1c96c223e [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 04:42:12.593951 ntpd[1968]: 15 Jul 04:42:12 ntpd[1968]: Listen normally on 14 cali62867600839 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 04:42:12.593951 ntpd[1968]: 15 Jul 04:42:12 ntpd[1968]: Listen normally on 15 cali160fd6f1895 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 04:42:12.593951 ntpd[1968]: 15 Jul 04:42:12 ntpd[1968]: Listen normally on 16 calib824976aece [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 04:42:12.592453 ntpd[1968]: Listen normally on 14 cali62867600839 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 04:42:12.593258 ntpd[1968]: Listen normally on 15 cali160fd6f1895 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 04:42:12.593430 ntpd[1968]: Listen normally on 16 calib824976aece [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 04:42:15.068379 containerd[2000]: time="2025-07-15T04:42:15.068316577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:15.070711 containerd[2000]: time="2025-07-15T04:42:15.070653181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 15 04:42:15.074175 containerd[2000]: time="2025-07-15T04:42:15.074060989Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:15.080805 containerd[2000]: time="2025-07-15T04:42:15.080720533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:15.083401 containerd[2000]: time="2025-07-15T04:42:15.083276209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id 
\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 5.388902546s" Jul 15 04:42:15.083683 containerd[2000]: time="2025-07-15T04:42:15.083363185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 04:42:15.087509 containerd[2000]: time="2025-07-15T04:42:15.087373309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 04:42:15.131340 containerd[2000]: time="2025-07-15T04:42:15.131257610Z" level=info msg="CreateContainer within sandbox \"1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 04:42:15.161165 containerd[2000]: time="2025-07-15T04:42:15.160173050Z" level=info msg="Container cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:42:15.182531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3966657519.mount: Deactivated successfully. 
Jul 15 04:42:15.187259 containerd[2000]: time="2025-07-15T04:42:15.187161974Z" level=info msg="CreateContainer within sandbox \"1d0613930746b5046987810df6e29c146b01807e244b79e22185efe7a6e01c8a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8\"" Jul 15 04:42:15.190391 containerd[2000]: time="2025-07-15T04:42:15.190325126Z" level=info msg="StartContainer for \"cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8\"" Jul 15 04:42:15.197725 containerd[2000]: time="2025-07-15T04:42:15.197649962Z" level=info msg="connecting to shim cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8" address="unix:///run/containerd/s/192c578f0230b153bab2c2ad2c5cd5810db653e424d3cab184bd448fc936684d" protocol=ttrpc version=3 Jul 15 04:42:15.256744 systemd[1]: Started cri-containerd-cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8.scope - libcontainer container cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8. 
Jul 15 04:42:15.457767 containerd[2000]: time="2025-07-15T04:42:15.457370739Z" level=info msg="StartContainer for \"cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8\" returns successfully" Jul 15 04:42:16.530821 kubelet[3530]: I0715 04:42:16.530720 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6dbccc49dd-dj5bx" podStartSLOduration=10.38390113 podStartE2EDuration="16.530695781s" podCreationTimestamp="2025-07-15 04:42:00 +0000 UTC" firstStartedPulling="2025-07-15 04:42:01.396268645 +0000 UTC m=+48.841285071" lastFinishedPulling="2025-07-15 04:42:07.543063284 +0000 UTC m=+54.988079722" observedRunningTime="2025-07-15 04:42:09.402034197 +0000 UTC m=+56.847050671" watchObservedRunningTime="2025-07-15 04:42:16.530695781 +0000 UTC m=+63.975712315" Jul 15 04:42:16.960692 containerd[2000]: time="2025-07-15T04:42:16.960639535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8\" id:\"5cfa84563b65d66d87d8b50d4035d35c3432a3d7c002b0a3282f4187e968ae21\" pid:5687 exited_at:{seconds:1752554536 nanos:956176183}" Jul 15 04:42:17.062564 kubelet[3530]: I0715 04:42:17.062454 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58b5d5d888-64j2b" podStartSLOduration=25.763698776 podStartE2EDuration="36.062426307s" podCreationTimestamp="2025-07-15 04:41:41 +0000 UTC" firstStartedPulling="2025-07-15 04:42:04.788043294 +0000 UTC m=+52.233059720" lastFinishedPulling="2025-07-15 04:42:15.086770813 +0000 UTC m=+62.531787251" observedRunningTime="2025-07-15 04:42:16.546245681 +0000 UTC m=+63.991262215" watchObservedRunningTime="2025-07-15 04:42:17.062426307 +0000 UTC m=+64.507442745" Jul 15 04:42:17.600620 systemd[1]: Started sshd@9-172.31.20.207:22-139.178.89.65:37700.service - OpenSSH per-connection server daemon (139.178.89.65:37700). 
Jul 15 04:42:17.882967 sshd[5704]: Accepted publickey for core from 139.178.89.65 port 37700 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:17.891354 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:17.915956 systemd-logind[1973]: New session 10 of user core. Jul 15 04:42:17.926668 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 04:42:18.461988 sshd[5707]: Connection closed by 139.178.89.65 port 37700 Jul 15 04:42:18.463863 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:18.478969 systemd[1]: sshd@9-172.31.20.207:22-139.178.89.65:37700.service: Deactivated successfully. Jul 15 04:42:18.487972 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 04:42:18.494783 systemd-logind[1973]: Session 10 logged out. Waiting for processes to exit. Jul 15 04:42:18.501185 systemd-logind[1973]: Removed session 10. Jul 15 04:42:18.524892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413378193.mount: Deactivated successfully. 
Jul 15 04:42:19.804084 containerd[2000]: time="2025-07-15T04:42:19.803934741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:19.807350 containerd[2000]: time="2025-07-15T04:42:19.807262593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 15 04:42:19.810559 containerd[2000]: time="2025-07-15T04:42:19.810449421Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:19.821818 containerd[2000]: time="2025-07-15T04:42:19.821724669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:19.824353 containerd[2000]: time="2025-07-15T04:42:19.823514049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 4.736072352s" Jul 15 04:42:19.824353 containerd[2000]: time="2025-07-15T04:42:19.823577685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 15 04:42:19.832325 containerd[2000]: time="2025-07-15T04:42:19.832096221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:42:19.839902 containerd[2000]: time="2025-07-15T04:42:19.839835525Z" level=info msg="CreateContainer within sandbox 
\"976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 04:42:19.867399 containerd[2000]: time="2025-07-15T04:42:19.867326541Z" level=info msg="Container 570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:42:19.890788 containerd[2000]: time="2025-07-15T04:42:19.890716413Z" level=info msg="CreateContainer within sandbox \"976221f439b6eb2dacbfd6b828a524c504cca2f11ab3de421eb254dae710eb95\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\"" Jul 15 04:42:19.892742 containerd[2000]: time="2025-07-15T04:42:19.892658253Z" level=info msg="StartContainer for \"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\"" Jul 15 04:42:19.899049 containerd[2000]: time="2025-07-15T04:42:19.898989213Z" level=info msg="connecting to shim 570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60" address="unix:///run/containerd/s/518cddd89659c035513ea1ef79ca8ff3f1f8ccdca2b8acd43bb1aef850223207" protocol=ttrpc version=3 Jul 15 04:42:19.959461 systemd[1]: Started cri-containerd-570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60.scope - libcontainer container 570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60. 
Jul 15 04:42:20.261494 containerd[2000]: time="2025-07-15T04:42:20.261170383Z" level=info msg="StartContainer for \"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\" returns successfully" Jul 15 04:42:20.525696 kubelet[3530]: I0715 04:42:20.525459 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-k6tnv" podStartSLOduration=27.449534396 podStartE2EDuration="39.525435248s" podCreationTimestamp="2025-07-15 04:41:41 +0000 UTC" firstStartedPulling="2025-07-15 04:42:07.753793833 +0000 UTC m=+55.198810271" lastFinishedPulling="2025-07-15 04:42:19.829694613 +0000 UTC m=+67.274711123" observedRunningTime="2025-07-15 04:42:20.521626508 +0000 UTC m=+67.966642970" watchObservedRunningTime="2025-07-15 04:42:20.525435248 +0000 UTC m=+67.970451686" Jul 15 04:42:22.954869 containerd[2000]: time="2025-07-15T04:42:22.954801961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\" id:\"f2465f957e47dd137f1498806b415acf3e0489fd9e600232067b3c5e39dac4d7\" pid:5785 exit_status:1 exited_at:{seconds:1752554542 nanos:954260869}" Jul 15 04:42:23.232215 containerd[2000]: time="2025-07-15T04:42:23.231790918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:23.235587 containerd[2000]: time="2025-07-15T04:42:23.235510846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 15 04:42:23.237684 containerd[2000]: time="2025-07-15T04:42:23.237464206Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:23.245464 containerd[2000]: time="2025-07-15T04:42:23.245389474Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:23.249742 containerd[2000]: time="2025-07-15T04:42:23.249671950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.417376961s" Jul 15 04:42:23.249742 containerd[2000]: time="2025-07-15T04:42:23.249737578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:42:23.253416 containerd[2000]: time="2025-07-15T04:42:23.253277638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 04:42:23.259737 containerd[2000]: time="2025-07-15T04:42:23.259660978Z" level=info msg="CreateContainer within sandbox \"28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:42:23.274135 containerd[2000]: time="2025-07-15T04:42:23.270569746Z" level=info msg="Container bf120fbd55b1de19d73ec6dac98609b468f024d3e5cd271d86bbf14ef0b23cb6: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:42:23.303409 containerd[2000]: time="2025-07-15T04:42:23.303330046Z" level=info msg="CreateContainer within sandbox \"28727cc06b484422b43f19740072e88bd97f0cf5c4f9da1f99e5f2c42f2dcccc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf120fbd55b1de19d73ec6dac98609b468f024d3e5cd271d86bbf14ef0b23cb6\"" Jul 15 04:42:23.306137 containerd[2000]: time="2025-07-15T04:42:23.305174878Z" level=info msg="StartContainer for 
\"bf120fbd55b1de19d73ec6dac98609b468f024d3e5cd271d86bbf14ef0b23cb6\"" Jul 15 04:42:23.311845 containerd[2000]: time="2025-07-15T04:42:23.310325122Z" level=info msg="connecting to shim bf120fbd55b1de19d73ec6dac98609b468f024d3e5cd271d86bbf14ef0b23cb6" address="unix:///run/containerd/s/2c2fc92afb839736d3ecdf4b291e46f7a4dfbb9a465a7565d046f42dedc752b3" protocol=ttrpc version=3 Jul 15 04:42:23.367832 systemd[1]: Started cri-containerd-bf120fbd55b1de19d73ec6dac98609b468f024d3e5cd271d86bbf14ef0b23cb6.scope - libcontainer container bf120fbd55b1de19d73ec6dac98609b468f024d3e5cd271d86bbf14ef0b23cb6. Jul 15 04:42:23.494839 containerd[2000]: time="2025-07-15T04:42:23.494613179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\" id:\"78ac3f042711de1c8a4810d7f2528eacd1cb3caa0947d28118f194ad52fb8f92\" pid:5815 exit_status:1 exited_at:{seconds:1752554543 nanos:494191235}" Jul 15 04:42:23.512898 systemd[1]: Started sshd@10-172.31.20.207:22-139.178.89.65:33314.service - OpenSSH per-connection server daemon (139.178.89.65:33314). 
Jul 15 04:42:23.566352 containerd[2000]: time="2025-07-15T04:42:23.566275692Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:23.574619 containerd[2000]: time="2025-07-15T04:42:23.574539588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 04:42:23.600157 containerd[2000]: time="2025-07-15T04:42:23.597561984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 344.213258ms" Jul 15 04:42:23.600157 containerd[2000]: time="2025-07-15T04:42:23.597638160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 04:42:23.602803 containerd[2000]: time="2025-07-15T04:42:23.602493912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 04:42:23.615331 containerd[2000]: time="2025-07-15T04:42:23.615143352Z" level=info msg="CreateContainer within sandbox \"cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 04:42:23.657998 containerd[2000]: time="2025-07-15T04:42:23.651688572Z" level=info msg="Container c9980478af5697394234716e894f15e5915caf3ca845a3d7070e2f12143dc2b0: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:42:23.676854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135095744.mount: Deactivated successfully. 
Jul 15 04:42:23.699931 containerd[2000]: time="2025-07-15T04:42:23.699831432Z" level=info msg="CreateContainer within sandbox \"cac5c15394786a0046565db5d6a194f6fae2c68b2b8ea862d6acb92e218c5fc3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c9980478af5697394234716e894f15e5915caf3ca845a3d7070e2f12143dc2b0\"" Jul 15 04:42:23.703330 containerd[2000]: time="2025-07-15T04:42:23.703221012Z" level=info msg="StartContainer for \"c9980478af5697394234716e894f15e5915caf3ca845a3d7070e2f12143dc2b0\"" Jul 15 04:42:23.717877 containerd[2000]: time="2025-07-15T04:42:23.716568360Z" level=info msg="connecting to shim c9980478af5697394234716e894f15e5915caf3ca845a3d7070e2f12143dc2b0" address="unix:///run/containerd/s/cfd4c17446c0fef1e0179f32b2881d06fd52145e457a0c1fd5de2678930aa93c" protocol=ttrpc version=3 Jul 15 04:42:23.718410 containerd[2000]: time="2025-07-15T04:42:23.718286064Z" level=info msg="StartContainer for \"bf120fbd55b1de19d73ec6dac98609b468f024d3e5cd271d86bbf14ef0b23cb6\" returns successfully" Jul 15 04:42:23.789591 systemd[1]: Started cri-containerd-c9980478af5697394234716e894f15e5915caf3ca845a3d7070e2f12143dc2b0.scope - libcontainer container c9980478af5697394234716e894f15e5915caf3ca845a3d7070e2f12143dc2b0. Jul 15 04:42:23.828410 sshd[5849]: Accepted publickey for core from 139.178.89.65 port 33314 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:23.833462 sshd-session[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:23.849512 systemd-logind[1973]: New session 11 of user core. Jul 15 04:42:23.857501 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 15 04:42:24.194823 containerd[2000]: time="2025-07-15T04:42:24.194737799Z" level=info msg="StartContainer for \"c9980478af5697394234716e894f15e5915caf3ca845a3d7070e2f12143dc2b0\" returns successfully" Jul 15 04:42:24.333188 sshd[5881]: Connection closed by 139.178.89.65 port 33314 Jul 15 04:42:24.333734 sshd-session[5849]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:24.346087 systemd[1]: sshd@10-172.31.20.207:22-139.178.89.65:33314.service: Deactivated successfully. Jul 15 04:42:24.354009 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 04:42:24.361879 systemd-logind[1973]: Session 11 logged out. Waiting for processes to exit. Jul 15 04:42:24.373118 systemd-logind[1973]: Removed session 11. Jul 15 04:42:24.596299 kubelet[3530]: I0715 04:42:24.596081 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85ff754f6c-6pz6d" podStartSLOduration=38.028362884 podStartE2EDuration="52.596061109s" podCreationTimestamp="2025-07-15 04:41:32 +0000 UTC" firstStartedPulling="2025-07-15 04:42:09.031973587 +0000 UTC m=+56.476990013" lastFinishedPulling="2025-07-15 04:42:23.5996718 +0000 UTC m=+71.044688238" observedRunningTime="2025-07-15 04:42:24.595242649 +0000 UTC m=+72.040259087" watchObservedRunningTime="2025-07-15 04:42:24.596061109 +0000 UTC m=+72.041077571" Jul 15 04:42:25.573184 kubelet[3530]: I0715 04:42:25.572371 3530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:42:26.117413 containerd[2000]: time="2025-07-15T04:42:26.117336048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:26.121142 containerd[2000]: time="2025-07-15T04:42:26.120801384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 15 04:42:26.122833 containerd[2000]: 
time="2025-07-15T04:42:26.122775324Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:26.130139 containerd[2000]: time="2025-07-15T04:42:26.129740196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:42:26.132581 containerd[2000]: time="2025-07-15T04:42:26.132500928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.5299282s" Jul 15 04:42:26.132581 containerd[2000]: time="2025-07-15T04:42:26.132571872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 15 04:42:26.143247 containerd[2000]: time="2025-07-15T04:42:26.142702116Z" level=info msg="CreateContainer within sandbox \"2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 04:42:26.169551 containerd[2000]: time="2025-07-15T04:42:26.169497372Z" level=info msg="Container 5e4f4b1841395a6cd7cf99d1173f8220a7970b2736cc4daba0db76ddde94afaf: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:42:26.200925 containerd[2000]: time="2025-07-15T04:42:26.200844349Z" level=info msg="CreateContainer within sandbox \"2d52095442cb4aa5ec23ca3854790fd6eae106512bc27fc3d0bd581cbff42f9a\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5e4f4b1841395a6cd7cf99d1173f8220a7970b2736cc4daba0db76ddde94afaf\"" Jul 15 04:42:26.202509 containerd[2000]: time="2025-07-15T04:42:26.201970681Z" level=info msg="StartContainer for \"5e4f4b1841395a6cd7cf99d1173f8220a7970b2736cc4daba0db76ddde94afaf\"" Jul 15 04:42:26.212164 containerd[2000]: time="2025-07-15T04:42:26.209288653Z" level=info msg="connecting to shim 5e4f4b1841395a6cd7cf99d1173f8220a7970b2736cc4daba0db76ddde94afaf" address="unix:///run/containerd/s/cdf644efc7e4e901ff7cc2b5001752ed06c4b83e8cabf466766d3b2ac69a977a" protocol=ttrpc version=3 Jul 15 04:42:26.292421 systemd[1]: Started cri-containerd-5e4f4b1841395a6cd7cf99d1173f8220a7970b2736cc4daba0db76ddde94afaf.scope - libcontainer container 5e4f4b1841395a6cd7cf99d1173f8220a7970b2736cc4daba0db76ddde94afaf. Jul 15 04:42:26.563402 containerd[2000]: time="2025-07-15T04:42:26.563070218Z" level=info msg="StartContainer for \"5e4f4b1841395a6cd7cf99d1173f8220a7970b2736cc4daba0db76ddde94afaf\" returns successfully" Jul 15 04:42:26.597323 kubelet[3530]: I0715 04:42:26.597270 3530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:42:26.649557 kubelet[3530]: I0715 04:42:26.647821 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85ff754f6c-68t68" podStartSLOduration=40.404590788 podStartE2EDuration="54.647796819s" podCreationTimestamp="2025-07-15 04:41:32 +0000 UTC" firstStartedPulling="2025-07-15 04:42:09.008251339 +0000 UTC m=+56.453267777" lastFinishedPulling="2025-07-15 04:42:23.251457382 +0000 UTC m=+70.696473808" observedRunningTime="2025-07-15 04:42:24.653880205 +0000 UTC m=+72.098896703" watchObservedRunningTime="2025-07-15 04:42:26.647796819 +0000 UTC m=+74.092813257" Jul 15 04:42:27.036310 kubelet[3530]: I0715 04:42:27.035549 3530 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io 
endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 04:42:27.036310 kubelet[3530]: I0715 04:42:27.035647 3530 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 04:42:29.097145 kubelet[3530]: I0715 04:42:29.097050 3530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 04:42:29.377641 systemd[1]: Started sshd@11-172.31.20.207:22-139.178.89.65:43240.service - OpenSSH per-connection server daemon (139.178.89.65:43240). Jul 15 04:42:29.612333 sshd[5958]: Accepted publickey for core from 139.178.89.65 port 43240 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:29.615787 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:29.630331 systemd-logind[1973]: New session 12 of user core. Jul 15 04:42:29.640032 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 04:42:29.984901 sshd[5961]: Connection closed by 139.178.89.65 port 43240 Jul 15 04:42:29.984701 sshd-session[5958]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:29.996229 systemd[1]: sshd@11-172.31.20.207:22-139.178.89.65:43240.service: Deactivated successfully. Jul 15 04:42:30.003436 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 04:42:30.009847 systemd-logind[1973]: Session 12 logged out. Waiting for processes to exit. Jul 15 04:42:30.027572 systemd[1]: Started sshd@12-172.31.20.207:22-139.178.89.65:43248.service - OpenSSH per-connection server daemon (139.178.89.65:43248). Jul 15 04:42:30.031425 systemd-logind[1973]: Removed session 12. 
Jul 15 04:42:30.252780 sshd[5974]: Accepted publickey for core from 139.178.89.65 port 43248 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:30.255792 sshd-session[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:30.265135 systemd-logind[1973]: New session 13 of user core. Jul 15 04:42:30.273192 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 04:42:30.768302 sshd[5977]: Connection closed by 139.178.89.65 port 43248 Jul 15 04:42:30.769419 sshd-session[5974]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:30.781625 systemd-logind[1973]: Session 13 logged out. Waiting for processes to exit. Jul 15 04:42:30.782999 systemd[1]: sshd@12-172.31.20.207:22-139.178.89.65:43248.service: Deactivated successfully. Jul 15 04:42:30.797903 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 04:42:30.828401 systemd-logind[1973]: Removed session 13. Jul 15 04:42:30.834676 systemd[1]: Started sshd@13-172.31.20.207:22-139.178.89.65:43250.service - OpenSSH per-connection server daemon (139.178.89.65:43250). Jul 15 04:42:31.048147 sshd[5987]: Accepted publickey for core from 139.178.89.65 port 43250 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:31.051877 sshd-session[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:31.066924 systemd-logind[1973]: New session 14 of user core. Jul 15 04:42:31.073394 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 04:42:31.407652 sshd[5990]: Connection closed by 139.178.89.65 port 43250 Jul 15 04:42:31.408524 sshd-session[5987]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:31.422021 systemd[1]: sshd@13-172.31.20.207:22-139.178.89.65:43250.service: Deactivated successfully. Jul 15 04:42:31.430713 systemd[1]: session-14.scope: Deactivated successfully. 
Jul 15 04:42:31.433079 systemd-logind[1973]: Session 14 logged out. Waiting for processes to exit. Jul 15 04:42:31.440928 systemd-logind[1973]: Removed session 14. Jul 15 04:42:33.736620 kubelet[3530]: I0715 04:42:33.736506 3530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ghdwd" podStartSLOduration=30.184107958 podStartE2EDuration="52.736479694s" podCreationTimestamp="2025-07-15 04:41:41 +0000 UTC" firstStartedPulling="2025-07-15 04:42:03.582280444 +0000 UTC m=+51.027296870" lastFinishedPulling="2025-07-15 04:42:26.134652168 +0000 UTC m=+73.579668606" observedRunningTime="2025-07-15 04:42:26.649955763 +0000 UTC m=+74.094972189" watchObservedRunningTime="2025-07-15 04:42:33.736479694 +0000 UTC m=+81.181496132" Jul 15 04:42:34.003793 containerd[2000]: time="2025-07-15T04:42:34.003151831Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea\" id:\"dba60342f2ef6c9be613f56e099cb4c4101c2964759720cdc617f36c3cd43b61\" pid:6019 exited_at:{seconds:1752554554 nanos:2436115}" Jul 15 04:42:36.448977 systemd[1]: Started sshd@14-172.31.20.207:22-139.178.89.65:43252.service - OpenSSH per-connection server daemon (139.178.89.65:43252). Jul 15 04:42:36.674983 sshd[6044]: Accepted publickey for core from 139.178.89.65 port 43252 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:36.678751 sshd-session[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:36.689578 systemd-logind[1973]: New session 15 of user core. Jul 15 04:42:36.696468 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 15 04:42:37.020962 sshd[6047]: Connection closed by 139.178.89.65 port 43252 Jul 15 04:42:37.022527 sshd-session[6044]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:37.031760 systemd[1]: sshd@14-172.31.20.207:22-139.178.89.65:43252.service: Deactivated successfully. Jul 15 04:42:37.032210 systemd-logind[1973]: Session 15 logged out. Waiting for processes to exit. Jul 15 04:42:37.040717 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 04:42:37.051187 systemd-logind[1973]: Removed session 15. Jul 15 04:42:42.059410 systemd[1]: Started sshd@15-172.31.20.207:22-139.178.89.65:35982.service - OpenSSH per-connection server daemon (139.178.89.65:35982). Jul 15 04:42:42.282156 sshd[6059]: Accepted publickey for core from 139.178.89.65 port 35982 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:42.286871 sshd-session[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:42.298972 systemd-logind[1973]: New session 16 of user core. Jul 15 04:42:42.307587 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 04:42:42.630206 sshd[6062]: Connection closed by 139.178.89.65 port 35982 Jul 15 04:42:42.631390 sshd-session[6059]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:42.639889 systemd-logind[1973]: Session 16 logged out. Waiting for processes to exit. Jul 15 04:42:42.640199 systemd[1]: sshd@15-172.31.20.207:22-139.178.89.65:35982.service: Deactivated successfully. Jul 15 04:42:42.644889 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 04:42:42.648406 systemd-logind[1973]: Removed session 16. 
Jul 15 04:42:46.570818 containerd[2000]: time="2025-07-15T04:42:46.570616570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8\" id:\"6348741e71b76f2c2cf6ca5654382b0d6e5d969c7c15f0fd842eea7d6ff61005\" pid:6092 exited_at:{seconds:1752554566 nanos:570065830}" Jul 15 04:42:47.670186 systemd[1]: Started sshd@16-172.31.20.207:22-139.178.89.65:35992.service - OpenSSH per-connection server daemon (139.178.89.65:35992). Jul 15 04:42:47.902902 sshd[6102]: Accepted publickey for core from 139.178.89.65 port 35992 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:47.906381 sshd-session[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:47.918371 systemd-logind[1973]: New session 17 of user core. Jul 15 04:42:47.924532 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 04:42:48.292019 sshd[6105]: Connection closed by 139.178.89.65 port 35992 Jul 15 04:42:48.292409 sshd-session[6102]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:48.303945 systemd[1]: sshd@16-172.31.20.207:22-139.178.89.65:35992.service: Deactivated successfully. Jul 15 04:42:48.304376 systemd-logind[1973]: Session 17 logged out. Waiting for processes to exit. Jul 15 04:42:48.311519 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 04:42:48.318296 systemd-logind[1973]: Removed session 17. Jul 15 04:42:53.339599 systemd[1]: Started sshd@17-172.31.20.207:22-139.178.89.65:47966.service - OpenSSH per-connection server daemon (139.178.89.65:47966). 
Jul 15 04:42:53.345767 containerd[2000]: time="2025-07-15T04:42:53.345703107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\" id:\"b45b0207e76e0273792ec01884a200a1fd506bb37aeb22e5f279555766e29202\" pid:6131 exited_at:{seconds:1752554573 nanos:344957271}" Jul 15 04:42:53.566237 sshd[6143]: Accepted publickey for core from 139.178.89.65 port 47966 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:53.570530 sshd-session[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:53.585661 systemd-logind[1973]: New session 18 of user core. Jul 15 04:42:53.593437 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 04:42:53.879805 sshd[6148]: Connection closed by 139.178.89.65 port 47966 Jul 15 04:42:53.882048 sshd-session[6143]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:53.893480 systemd[1]: sshd@17-172.31.20.207:22-139.178.89.65:47966.service: Deactivated successfully. Jul 15 04:42:53.899808 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 04:42:53.903231 systemd-logind[1973]: Session 18 logged out. Waiting for processes to exit. Jul 15 04:42:53.926368 systemd[1]: Started sshd@18-172.31.20.207:22-139.178.89.65:47982.service - OpenSSH per-connection server daemon (139.178.89.65:47982). Jul 15 04:42:53.931210 systemd-logind[1973]: Removed session 18. Jul 15 04:42:54.138767 sshd[6161]: Accepted publickey for core from 139.178.89.65 port 47982 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:54.143163 sshd-session[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:54.157852 systemd-logind[1973]: New session 19 of user core. Jul 15 04:42:54.168427 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 15 04:42:54.943654 sshd[6164]: Connection closed by 139.178.89.65 port 47982 Jul 15 04:42:54.946772 sshd-session[6161]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:54.955965 systemd[1]: sshd@18-172.31.20.207:22-139.178.89.65:47982.service: Deactivated successfully. Jul 15 04:42:54.965490 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 04:42:54.968883 systemd-logind[1973]: Session 19 logged out. Waiting for processes to exit. Jul 15 04:42:54.995263 systemd[1]: Started sshd@19-172.31.20.207:22-139.178.89.65:47988.service - OpenSSH per-connection server daemon (139.178.89.65:47988). Jul 15 04:42:54.996958 systemd-logind[1973]: Removed session 19. Jul 15 04:42:55.211395 sshd[6174]: Accepted publickey for core from 139.178.89.65 port 47988 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:55.213081 sshd-session[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:55.225539 systemd-logind[1973]: New session 20 of user core. Jul 15 04:42:55.231428 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 15 04:42:57.071152 sshd[6177]: Connection closed by 139.178.89.65 port 47988 Jul 15 04:42:57.072556 sshd-session[6174]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:57.086620 systemd-logind[1973]: Session 20 logged out. Waiting for processes to exit. Jul 15 04:42:57.087631 systemd[1]: sshd@19-172.31.20.207:22-139.178.89.65:47988.service: Deactivated successfully. Jul 15 04:42:57.097655 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 04:42:57.124967 systemd-logind[1973]: Removed session 20. Jul 15 04:42:57.127578 systemd[1]: Started sshd@20-172.31.20.207:22-139.178.89.65:47998.service - OpenSSH per-connection server daemon (139.178.89.65:47998). 
Jul 15 04:42:57.345085 sshd[6195]: Accepted publickey for core from 139.178.89.65 port 47998 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:57.347539 sshd-session[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:57.359289 systemd-logind[1973]: New session 21 of user core. Jul 15 04:42:57.365474 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 04:42:58.050363 sshd[6202]: Connection closed by 139.178.89.65 port 47998 Jul 15 04:42:58.051246 sshd-session[6195]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:58.065784 systemd[1]: sshd@20-172.31.20.207:22-139.178.89.65:47998.service: Deactivated successfully. Jul 15 04:42:58.069840 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 04:42:58.073405 systemd-logind[1973]: Session 21 logged out. Waiting for processes to exit. Jul 15 04:42:58.100757 systemd[1]: Started sshd@21-172.31.20.207:22-139.178.89.65:48004.service - OpenSSH per-connection server daemon (139.178.89.65:48004). Jul 15 04:42:58.111772 systemd-logind[1973]: Removed session 21. Jul 15 04:42:58.322217 sshd[6212]: Accepted publickey for core from 139.178.89.65 port 48004 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:42:58.324556 sshd-session[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:42:58.337998 systemd-logind[1973]: New session 22 of user core. Jul 15 04:42:58.346508 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 15 04:42:58.649696 sshd[6215]: Connection closed by 139.178.89.65 port 48004 Jul 15 04:42:58.650819 sshd-session[6212]: pam_unix(sshd:session): session closed for user core Jul 15 04:42:58.668324 systemd[1]: sshd@21-172.31.20.207:22-139.178.89.65:48004.service: Deactivated successfully. Jul 15 04:42:58.678047 systemd[1]: session-22.scope: Deactivated successfully. 
Jul 15 04:42:58.684247 systemd-logind[1973]: Session 22 logged out. Waiting for processes to exit. Jul 15 04:42:58.688758 systemd-logind[1973]: Removed session 22. Jul 15 04:43:03.693284 systemd[1]: Started sshd@22-172.31.20.207:22-139.178.89.65:49904.service - OpenSSH per-connection server daemon (139.178.89.65:49904). Jul 15 04:43:03.898957 containerd[2000]: time="2025-07-15T04:43:03.898908352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea\" id:\"0a0575b8446162bb80c1f0adda9909d0a351f23198f3c5098f2efa617b4f0ccf\" pid:6240 exited_at:{seconds:1752554583 nanos:898411276}" Jul 15 04:43:03.906951 sshd[6247]: Accepted publickey for core from 139.178.89.65 port 49904 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:43:03.911441 sshd-session[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:43:03.920680 systemd-logind[1973]: New session 23 of user core. Jul 15 04:43:03.928686 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 15 04:43:04.291133 sshd[6255]: Connection closed by 139.178.89.65 port 49904 Jul 15 04:43:04.291610 sshd-session[6247]: pam_unix(sshd:session): session closed for user core Jul 15 04:43:04.300368 systemd[1]: sshd@22-172.31.20.207:22-139.178.89.65:49904.service: Deactivated successfully. Jul 15 04:43:04.305701 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 04:43:04.316341 systemd-logind[1973]: Session 23 logged out. Waiting for processes to exit. Jul 15 04:43:04.319873 systemd-logind[1973]: Removed session 23. Jul 15 04:43:09.329573 systemd[1]: Started sshd@23-172.31.20.207:22-139.178.89.65:45556.service - OpenSSH per-connection server daemon (139.178.89.65:45556). 
Jul 15 04:43:09.561723 sshd[6272]: Accepted publickey for core from 139.178.89.65 port 45556 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:43:09.566747 sshd-session[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:43:09.583050 systemd-logind[1973]: New session 24 of user core. Jul 15 04:43:09.587460 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 04:43:09.916445 sshd[6275]: Connection closed by 139.178.89.65 port 45556 Jul 15 04:43:09.920290 sshd-session[6272]: pam_unix(sshd:session): session closed for user core Jul 15 04:43:09.933058 systemd[1]: sshd@23-172.31.20.207:22-139.178.89.65:45556.service: Deactivated successfully. Jul 15 04:43:09.941868 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 04:43:09.946085 systemd-logind[1973]: Session 24 logged out. Waiting for processes to exit. Jul 15 04:43:09.949380 systemd-logind[1973]: Removed session 24. Jul 15 04:43:11.606013 update_engine[1974]: I20250715 04:43:11.605093 1974 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 15 04:43:11.606013 update_engine[1974]: I20250715 04:43:11.605199 1974 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 15 04:43:11.606013 update_engine[1974]: I20250715 04:43:11.605669 1974 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 15 04:43:11.607690 update_engine[1974]: I20250715 04:43:11.607576 1974 omaha_request_params.cc:62] Current group set to developer Jul 15 04:43:11.607955 update_engine[1974]: I20250715 04:43:11.607919 1974 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 15 04:43:11.608051 update_engine[1974]: I20250715 04:43:11.608021 1974 update_attempter.cc:643] Scheduling an action processor start. 
Jul 15 04:43:11.608769 update_engine[1974]: I20250715 04:43:11.608186 1974 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 15 04:43:11.608769 update_engine[1974]: I20250715 04:43:11.608263 1974 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 15 04:43:11.608769 update_engine[1974]: I20250715 04:43:11.608381 1974 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 15 04:43:11.608769 update_engine[1974]: I20250715 04:43:11.608399 1974 omaha_request_action.cc:272] Request: Jul 15 04:43:11.608769 update_engine[1974]: Jul 15 04:43:11.608769 update_engine[1974]: Jul 15 04:43:11.608769 update_engine[1974]: Jul 15 04:43:11.608769 update_engine[1974]: Jul 15 04:43:11.608769 update_engine[1974]: Jul 15 04:43:11.608769 update_engine[1974]: Jul 15 04:43:11.608769 update_engine[1974]: Jul 15 04:43:11.608769 update_engine[1974]: Jul 15 04:43:11.608769 update_engine[1974]: I20250715 04:43:11.608414 1974 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 15 04:43:11.625233 update_engine[1974]: I20250715 04:43:11.624684 1974 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 15 04:43:11.626927 update_engine[1974]: I20250715 04:43:11.626765 1974 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 15 04:43:11.627966 locksmithd[2026]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 15 04:43:11.637145 update_engine[1974]: E20250715 04:43:11.636524 1974 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 15 04:43:11.637351 update_engine[1974]: I20250715 04:43:11.636662 1974 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 15 04:43:14.955774 systemd[1]: Started sshd@24-172.31.20.207:22-139.178.89.65:45570.service - OpenSSH per-connection server daemon (139.178.89.65:45570). 
Jul 15 04:43:15.171980 sshd[6289]: Accepted publickey for core from 139.178.89.65 port 45570 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:43:15.174709 sshd-session[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:43:15.185406 systemd-logind[1973]: New session 25 of user core. Jul 15 04:43:15.194069 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 15 04:43:15.488854 sshd[6292]: Connection closed by 139.178.89.65 port 45570 Jul 15 04:43:15.489747 sshd-session[6289]: pam_unix(sshd:session): session closed for user core Jul 15 04:43:15.499229 systemd[1]: sshd@24-172.31.20.207:22-139.178.89.65:45570.service: Deactivated successfully. Jul 15 04:43:15.510238 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 04:43:15.519773 systemd-logind[1973]: Session 25 logged out. Waiting for processes to exit. Jul 15 04:43:15.525019 systemd-logind[1973]: Removed session 25. Jul 15 04:43:16.705905 containerd[2000]: time="2025-07-15T04:43:16.705512236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8\" id:\"47054c6bf35a4448a217b6f0b10c9696cb6eeb4fa767929ef84fede6fada0b38\" pid:6314 exited_at:{seconds:1752554596 nanos:705092848}" Jul 15 04:43:18.589182 containerd[2000]: time="2025-07-15T04:43:18.589091153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\" id:\"da19c2eef1f342d8888ec2c9dc26325e77ebcaa5f74645b3e9495260d4056e7e\" pid:6336 exited_at:{seconds:1752554598 nanos:588718469}" Jul 15 04:43:20.526693 systemd[1]: Started sshd@25-172.31.20.207:22-139.178.89.65:53916.service - OpenSSH per-connection server daemon (139.178.89.65:53916). 
Jul 15 04:43:20.731033 sshd[6349]: Accepted publickey for core from 139.178.89.65 port 53916 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE Jul 15 04:43:20.733755 sshd-session[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:43:20.745842 systemd-logind[1973]: New session 26 of user core. Jul 15 04:43:20.752742 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 15 04:43:21.078145 sshd[6352]: Connection closed by 139.178.89.65 port 53916 Jul 15 04:43:21.079425 sshd-session[6349]: pam_unix(sshd:session): session closed for user core Jul 15 04:43:21.090061 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 04:43:21.096797 systemd[1]: sshd@25-172.31.20.207:22-139.178.89.65:53916.service: Deactivated successfully. Jul 15 04:43:21.106223 systemd-logind[1973]: Session 26 logged out. Waiting for processes to exit. Jul 15 04:43:21.112433 systemd-logind[1973]: Removed session 26. Jul 15 04:43:21.603281 update_engine[1974]: I20250715 04:43:21.603186 1974 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 15 04:43:21.603807 update_engine[1974]: I20250715 04:43:21.603560 1974 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 15 04:43:21.604036 update_engine[1974]: I20250715 04:43:21.603965 1974 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 15 04:43:21.612028 update_engine[1974]: E20250715 04:43:21.611935 1974 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 15 04:43:21.612252 update_engine[1974]: I20250715 04:43:21.612049 1974 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 15 04:43:21.959291 containerd[2000]: time="2025-07-15T04:43:21.959139454Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8\" id:\"04fb2bf103a017297eb2ef7bbbddd26ad169fe2a79cb52f2abcc1bdc5011f9e2\" pid:6375 exited_at:{seconds:1752554601 nanos:958451050}"
Jul 15 04:43:23.200796 containerd[2000]: time="2025-07-15T04:43:23.200731544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\" id:\"3addc3598d7d3cf7a012321bce8f9a7da9ca67f2c12b49e3970b41059c972519\" pid:6403 exited_at:{seconds:1752554603 nanos:200294312}"
Jul 15 04:43:26.117556 systemd[1]: Started sshd@26-172.31.20.207:22-139.178.89.65:53920.service - OpenSSH per-connection server daemon (139.178.89.65:53920).
Jul 15 04:43:26.312354 sshd[6415]: Accepted publickey for core from 139.178.89.65 port 53920 ssh2: RSA SHA256:OM8Z8cK0hFjQDS+avOAag4EvUCsx3+0prlBsjg6IecE
Jul 15 04:43:26.315686 sshd-session[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 04:43:26.327517 systemd-logind[1973]: New session 27 of user core.
Jul 15 04:43:26.335521 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 15 04:43:26.633434 sshd[6418]: Connection closed by 139.178.89.65 port 53920
Jul 15 04:43:26.634338 sshd-session[6415]: pam_unix(sshd:session): session closed for user core
Jul 15 04:43:26.643061 systemd[1]: sshd@26-172.31.20.207:22-139.178.89.65:53920.service: Deactivated successfully.
Jul 15 04:43:26.650004 systemd[1]: session-27.scope: Deactivated successfully.
Jul 15 04:43:26.654266 systemd-logind[1973]: Session 27 logged out. Waiting for processes to exit.
Jul 15 04:43:26.659064 systemd-logind[1973]: Removed session 27.
Jul 15 04:43:31.608065 update_engine[1974]: I20250715 04:43:31.607178 1974 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 15 04:43:31.608065 update_engine[1974]: I20250715 04:43:31.607547 1974 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 15 04:43:31.608065 update_engine[1974]: I20250715 04:43:31.607966 1974 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 15 04:43:31.610323 update_engine[1974]: E20250715 04:43:31.610265 1974 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 15 04:43:31.610578 update_engine[1974]: I20250715 04:43:31.610531 1974 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 15 04:43:33.846791 containerd[2000]: time="2025-07-15T04:43:33.846725037Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea\" id:\"921defc6070cf8f21adda768b8ece70573b4bbf2bb233c854419d9f4f0f29580\" pid:6445 exit_status:1 exited_at:{seconds:1752554613 nanos:845907117}"
Jul 15 04:43:41.613007 update_engine[1974]: I20250715 04:43:41.612162 1974 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 15 04:43:41.613007 update_engine[1974]: I20250715 04:43:41.612525 1974 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 15 04:43:41.613007 update_engine[1974]: I20250715 04:43:41.612936 1974 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 15 04:43:41.614586 update_engine[1974]: E20250715 04:43:41.614517 1974 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 15 04:43:41.614716 update_engine[1974]: I20250715 04:43:41.614615 1974 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 15 04:43:41.614716 update_engine[1974]: I20250715 04:43:41.614635 1974 omaha_request_action.cc:617] Omaha request response:
Jul 15 04:43:41.614816 update_engine[1974]: E20250715 04:43:41.614755 1974 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 15 04:43:41.614816 update_engine[1974]: I20250715 04:43:41.614794 1974 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 15 04:43:41.614910 update_engine[1974]: I20250715 04:43:41.614809 1974 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 15 04:43:41.614910 update_engine[1974]: I20250715 04:43:41.614823 1974 update_attempter.cc:306] Processing Done.
Jul 15 04:43:41.614910 update_engine[1974]: E20250715 04:43:41.614850 1974 update_attempter.cc:619] Update failed.
Jul 15 04:43:41.614910 update_engine[1974]: I20250715 04:43:41.614866 1974 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 15 04:43:41.614910 update_engine[1974]: I20250715 04:43:41.614879 1974 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 15 04:43:41.614910 update_engine[1974]: I20250715 04:43:41.614893 1974 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 15 04:43:41.615214 update_engine[1974]: I20250715 04:43:41.614997 1974 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 15 04:43:41.615214 update_engine[1974]: I20250715 04:43:41.615038 1974 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 15 04:43:41.615214 update_engine[1974]: I20250715 04:43:41.615056 1974 omaha_request_action.cc:272] Request:
Jul 15 04:43:41.615214 update_engine[1974]:
Jul 15 04:43:41.615214 update_engine[1974]:
Jul 15 04:43:41.615214 update_engine[1974]:
Jul 15 04:43:41.615214 update_engine[1974]:
Jul 15 04:43:41.615214 update_engine[1974]:
Jul 15 04:43:41.615214 update_engine[1974]:
Jul 15 04:43:41.615214 update_engine[1974]: I20250715 04:43:41.615072 1974 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 15 04:43:41.615652 update_engine[1974]: I20250715 04:43:41.615360 1974 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 15 04:43:41.615749 update_engine[1974]: I20250715 04:43:41.615698 1974 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 15 04:43:41.616674 update_engine[1974]: E20250715 04:43:41.616474 1974 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 15 04:43:41.616801 update_engine[1974]: I20250715 04:43:41.616735 1974 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 15 04:43:41.616801 update_engine[1974]: I20250715 04:43:41.616758 1974 omaha_request_action.cc:617] Omaha request response:
Jul 15 04:43:41.616898 update_engine[1974]: I20250715 04:43:41.616775 1974 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 15 04:43:41.616898 update_engine[1974]: I20250715 04:43:41.616820 1974 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 15 04:43:41.616898 update_engine[1974]: I20250715 04:43:41.616837 1974 update_attempter.cc:306] Processing Done.
Jul 15 04:43:41.617045 update_engine[1974]: I20250715 04:43:41.616858 1974 update_attempter.cc:310] Error event sent.
Jul 15 04:43:41.617045 update_engine[1974]: I20250715 04:43:41.616938 1974 update_check_scheduler.cc:74] Next update check in 40m46s
Jul 15 04:43:41.617811 locksmithd[2026]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 15 04:43:41.617811 locksmithd[2026]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 15 04:43:46.559898 containerd[2000]: time="2025-07-15T04:43:46.559716572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8\" id:\"4b5cfeacc557369fb1abc8fe17afbe69d8e9d45c40802c68acd229273bb3e3ea\" pid:6493 exited_at:{seconds:1752554626 nanos:558368564}"
Jul 15 04:43:53.343281 containerd[2000]: time="2025-07-15T04:43:53.342926605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\" id:\"5c4870231e3b98be6e475d6f3625e21d095bb04b39516052ba0545d5cf21edc4\" pid:6516 exited_at:{seconds:1752554633 nanos:342340813}"
Jul 15 04:44:03.774623 containerd[2000]: time="2025-07-15T04:44:03.773922829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5350b4402b30a84375857933b3837db63955567b55dce7dc6e90800571154dea\" id:\"1db2745b20615353352d78d295a2329f290bd83bc4e0e6d06e17f9a7fc095c04\" pid:6540 exit_status:1 exited_at:{seconds:1752554643 nanos:773374141}"
Jul 15 04:44:13.325524 systemd[1]: cri-containerd-e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b.scope: Deactivated successfully.
Jul 15 04:44:13.327211 systemd[1]: cri-containerd-e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b.scope: Consumed 7.801s CPU time, 62.6M memory peak, 128K read from disk.
Jul 15 04:44:13.337453 containerd[2000]: time="2025-07-15T04:44:13.336091185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b\" id:\"e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b\" pid:3147 exit_status:1 exited_at:{seconds:1752554653 nanos:334706565}"
Jul 15 04:44:13.337453 containerd[2000]: time="2025-07-15T04:44:13.337055925Z" level=info msg="received exit event container_id:\"e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b\" id:\"e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b\" pid:3147 exit_status:1 exited_at:{seconds:1752554653 nanos:334706565}"
Jul 15 04:44:13.382381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b-rootfs.mount: Deactivated successfully.
Jul 15 04:44:13.970366 systemd[1]: cri-containerd-1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590.scope: Deactivated successfully.
Jul 15 04:44:13.970942 systemd[1]: cri-containerd-1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590.scope: Consumed 35.746s CPU time, 112.3M memory peak, 528K read from disk.
Jul 15 04:44:13.979145 containerd[2000]: time="2025-07-15T04:44:13.979026648Z" level=info msg="received exit event container_id:\"1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590\" id:\"1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590\" pid:3856 exit_status:1 exited_at:{seconds:1752554653 nanos:978335868}"
Jul 15 04:44:13.979581 containerd[2000]: time="2025-07-15T04:44:13.979351332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590\" id:\"1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590\" pid:3856 exit_status:1 exited_at:{seconds:1752554653 nanos:978335868}"
Jul 15 04:44:14.019799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590-rootfs.mount: Deactivated successfully.
Jul 15 04:44:14.039961 kubelet[3530]: I0715 04:44:14.039848 3530 scope.go:117] "RemoveContainer" containerID="e656d37c0c02cee74831adccb6d13409b8d79a2f9216f7e4ca43e1b07f83894b"
Jul 15 04:44:14.048941 containerd[2000]: time="2025-07-15T04:44:14.048893336Z" level=info msg="CreateContainer within sandbox \"3bbe6ebef381e18ee6751770431d3250ad5ef37df8370caa622691def0fa202d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 15 04:44:14.070133 containerd[2000]: time="2025-07-15T04:44:14.069391520Z" level=info msg="Container 2f9981d7f2c319aa18c4373fb6d383a43a45871736c597f63b54a9d24c60c70a: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:44:14.090562 containerd[2000]: time="2025-07-15T04:44:14.090432993Z" level=info msg="CreateContainer within sandbox \"3bbe6ebef381e18ee6751770431d3250ad5ef37df8370caa622691def0fa202d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2f9981d7f2c319aa18c4373fb6d383a43a45871736c597f63b54a9d24c60c70a\""
Jul 15 04:44:14.091307 containerd[2000]: time="2025-07-15T04:44:14.091268409Z" level=info msg="StartContainer for \"2f9981d7f2c319aa18c4373fb6d383a43a45871736c597f63b54a9d24c60c70a\""
Jul 15 04:44:14.094030 containerd[2000]: time="2025-07-15T04:44:14.093920085Z" level=info msg="connecting to shim 2f9981d7f2c319aa18c4373fb6d383a43a45871736c597f63b54a9d24c60c70a" address="unix:///run/containerd/s/2b884b488971e83ebde03d3d3147a7d474367ffcd48baf1431367ff1335400ae" protocol=ttrpc version=3
Jul 15 04:44:14.136417 systemd[1]: Started cri-containerd-2f9981d7f2c319aa18c4373fb6d383a43a45871736c597f63b54a9d24c60c70a.scope - libcontainer container 2f9981d7f2c319aa18c4373fb6d383a43a45871736c597f63b54a9d24c60c70a.
Jul 15 04:44:14.221487 containerd[2000]: time="2025-07-15T04:44:14.221329761Z" level=info msg="StartContainer for \"2f9981d7f2c319aa18c4373fb6d383a43a45871736c597f63b54a9d24c60c70a\" returns successfully"
Jul 15 04:44:15.062918 kubelet[3530]: I0715 04:44:15.061842 3530 scope.go:117] "RemoveContainer" containerID="1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590"
Jul 15 04:44:15.069250 containerd[2000]: time="2025-07-15T04:44:15.068474109Z" level=info msg="CreateContainer within sandbox \"32fea0b7869917c7423c44ad822809b0047e5a1127ef039b5b907136d9c32def\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 15 04:44:15.090336 containerd[2000]: time="2025-07-15T04:44:15.088562817Z" level=info msg="Container b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:44:15.107688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000116471.mount: Deactivated successfully.
Jul 15 04:44:15.115422 containerd[2000]: time="2025-07-15T04:44:15.115242022Z" level=info msg="CreateContainer within sandbox \"32fea0b7869917c7423c44ad822809b0047e5a1127ef039b5b907136d9c32def\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342\""
Jul 15 04:44:15.116668 containerd[2000]: time="2025-07-15T04:44:15.116613262Z" level=info msg="StartContainer for \"b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342\""
Jul 15 04:44:15.119074 containerd[2000]: time="2025-07-15T04:44:15.119024134Z" level=info msg="connecting to shim b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342" address="unix:///run/containerd/s/6e6df6ca0da2298a0aaceb7626455687a366ffdf27cce2ed049a697c83c48db3" protocol=ttrpc version=3
Jul 15 04:44:15.169582 systemd[1]: Started cri-containerd-b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342.scope - libcontainer container b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342.
Jul 15 04:44:15.239312 containerd[2000]: time="2025-07-15T04:44:15.239213854Z" level=info msg="StartContainer for \"b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342\" returns successfully"
Jul 15 04:44:15.883298 kubelet[3530]: E0715 04:44:15.882765 3530 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-207?timeout=10s\": context deadline exceeded"
Jul 15 04:44:16.633638 containerd[2000]: time="2025-07-15T04:44:16.633573181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8\" id:\"a5725ac94b3fd92e8bc8784a12deb97a3e234ae9955212220e5c7cbd9901c364\" pid:6653 exit_status:1 exited_at:{seconds:1752554656 nanos:632334973}"
Jul 15 04:44:17.225964 systemd[1]: cri-containerd-244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020.scope: Deactivated successfully.
Jul 15 04:44:17.226593 systemd[1]: cri-containerd-244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020.scope: Consumed 5.452s CPU time, 20.3M memory peak, 172K read from disk.
Jul 15 04:44:17.234671 containerd[2000]: time="2025-07-15T04:44:17.234541932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020\" id:\"244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020\" pid:3196 exit_status:1 exited_at:{seconds:1752554657 nanos:233715576}"
Jul 15 04:44:17.235037 containerd[2000]: time="2025-07-15T04:44:17.234644856Z" level=info msg="received exit event container_id:\"244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020\" id:\"244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020\" pid:3196 exit_status:1 exited_at:{seconds:1752554657 nanos:233715576}"
Jul 15 04:44:17.278208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020-rootfs.mount: Deactivated successfully.
Jul 15 04:44:18.080044 kubelet[3530]: I0715 04:44:18.079956 3530 scope.go:117] "RemoveContainer" containerID="244585f84c22dcd9e3154bbfbeef366a3d71bcfd9e1c7453b143e6e52d465020"
Jul 15 04:44:18.084181 containerd[2000]: time="2025-07-15T04:44:18.084096960Z" level=info msg="CreateContainer within sandbox \"9c656914335b05cfe10ec4da902c21aff970af597094b380c3c167c93108060f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 15 04:44:18.101273 containerd[2000]: time="2025-07-15T04:44:18.099547272Z" level=info msg="Container 44862da872e088c449f680778bb10cab96133d71e8a01c15c1e4859f873314a2: CDI devices from CRI Config.CDIDevices: []"
Jul 15 04:44:18.117378 containerd[2000]: time="2025-07-15T04:44:18.117327733Z" level=info msg="CreateContainer within sandbox \"9c656914335b05cfe10ec4da902c21aff970af597094b380c3c167c93108060f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"44862da872e088c449f680778bb10cab96133d71e8a01c15c1e4859f873314a2\""
Jul 15 04:44:18.118655 containerd[2000]: time="2025-07-15T04:44:18.118605661Z" level=info msg="StartContainer for \"44862da872e088c449f680778bb10cab96133d71e8a01c15c1e4859f873314a2\""
Jul 15 04:44:18.120933 containerd[2000]: time="2025-07-15T04:44:18.120863269Z" level=info msg="connecting to shim 44862da872e088c449f680778bb10cab96133d71e8a01c15c1e4859f873314a2" address="unix:///run/containerd/s/52edcbbec7fee63f7f4070c7c41b2828cd6dca62da7791e359a2ba712ffe9405" protocol=ttrpc version=3
Jul 15 04:44:18.161417 systemd[1]: Started cri-containerd-44862da872e088c449f680778bb10cab96133d71e8a01c15c1e4859f873314a2.scope - libcontainer container 44862da872e088c449f680778bb10cab96133d71e8a01c15c1e4859f873314a2.
Jul 15 04:44:18.238317 containerd[2000]: time="2025-07-15T04:44:18.238134433Z" level=info msg="StartContainer for \"44862da872e088c449f680778bb10cab96133d71e8a01c15c1e4859f873314a2\" returns successfully"
Jul 15 04:44:18.568457 containerd[2000]: time="2025-07-15T04:44:18.568397091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\" id:\"c1425d50df23e3f0fa1383f8da23619f89de7264d532e507cd117295377ba045\" pid:6717 exited_at:{seconds:1752554658 nanos:567484707}"
Jul 15 04:44:21.929239 containerd[2000]: time="2025-07-15T04:44:21.929139091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cecc0c945400aea1cb9d24a90e112d04c6fb85572e69d3a28625610fac965ef8\" id:\"acb3fedd8fa4206a23a80d63cfdd769bf6109c52aa010b5ed179dca44e3ba024\" pid:6743 exit_status:1 exited_at:{seconds:1752554661 nanos:927991555}"
Jul 15 04:44:23.114527 containerd[2000]: time="2025-07-15T04:44:23.114458537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570bd525704abc0c40eca1a6b18b9a1c01ca60c381d16f54e8f2961bac808a60\" id:\"e733ca84222427695af6158444b3e9079fec2f5a641fe9a1fa392d17826eac04\" pid:6766 exited_at:{seconds:1752554663 nanos:114023501}"
Jul 15 04:44:25.883433 kubelet[3530]: E0715 04:44:25.883349 3530 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-207?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 15 04:44:26.771632 systemd[1]: cri-containerd-b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342.scope: Deactivated successfully.
Jul 15 04:44:26.775662 containerd[2000]: time="2025-07-15T04:44:26.775544496Z" level=info msg="received exit event container_id:\"b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342\" id:\"b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342\" pid:6619 exit_status:1 exited_at:{seconds:1752554666 nanos:775032732}"
Jul 15 04:44:26.776623 containerd[2000]: time="2025-07-15T04:44:26.776348496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342\" id:\"b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342\" pid:6619 exit_status:1 exited_at:{seconds:1752554666 nanos:775032732}"
Jul 15 04:44:26.822009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342-rootfs.mount: Deactivated successfully.
Jul 15 04:44:27.122554 kubelet[3530]: I0715 04:44:27.122461 3530 scope.go:117] "RemoveContainer" containerID="1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590"
Jul 15 04:44:27.123267 kubelet[3530]: I0715 04:44:27.123078 3530 scope.go:117] "RemoveContainer" containerID="b49bcb30f56d4edaf0a2e6f45dea838a9a63b5c68982e8251f895147e90f5342"
Jul 15 04:44:27.124032 kubelet[3530]: E0715 04:44:27.123875 3530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-gpg5v_tigera-operator(64384fb1-3e95-47c3-ab73-40a2f87cd085)\"" pod="tigera-operator/tigera-operator-747864d56d-gpg5v" podUID="64384fb1-3e95-47c3-ab73-40a2f87cd085"
Jul 15 04:44:27.126908 containerd[2000]: time="2025-07-15T04:44:27.126718401Z" level=info msg="RemoveContainer for \"1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590\""
Jul 15 04:44:27.138799 containerd[2000]: time="2025-07-15T04:44:27.138745737Z" level=info msg="RemoveContainer for \"1e4b5d67a2dded0ba1d8b85efa4395fe597b8277cc686e6e066bc0e01d4b6590\" returns successfully"