Aug 12 23:40:52.137300 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Aug 12 23:40:52.137360 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Aug 12 21:51:24 -00 2025
Aug 12 23:40:52.137387 kernel: KASLR disabled due to lack of seed
Aug 12 23:40:52.137404 kernel: efi: EFI v2.7 by EDK II
Aug 12 23:40:52.137424 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598
Aug 12 23:40:52.137440 kernel: secureboot: Secure boot disabled
Aug 12 23:40:52.137458 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:40:52.137474 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Aug 12 23:40:52.137489 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Aug 12 23:40:52.137505 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 12 23:40:52.137520 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Aug 12 23:40:52.137540 kernel: ACPI: FACS 0x0000000078630000 000040
Aug 12 23:40:52.137555 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 12 23:40:52.137571 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Aug 12 23:40:52.137591 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Aug 12 23:40:52.137607 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Aug 12 23:40:52.137629 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 12 23:40:52.137645 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Aug 12 23:40:52.137661 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Aug 12 23:40:52.137677 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Aug 12 23:40:52.137693 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Aug 12 23:40:52.137709 kernel: printk: legacy bootconsole [uart0] enabled
Aug 12 23:40:52.137725 kernel: ACPI: Use ACPI SPCR as default console: Yes
Aug 12 23:40:52.137742 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 12 23:40:52.137759 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Aug 12 23:40:52.137775 kernel: Zone ranges:
Aug 12 23:40:52.137791 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Aug 12 23:40:52.137812 kernel: DMA32 empty
Aug 12 23:40:52.137828 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Aug 12 23:40:52.137843 kernel: Device empty
Aug 12 23:40:52.137859 kernel: Movable zone start for each node
Aug 12 23:40:52.137875 kernel: Early memory node ranges
Aug 12 23:40:52.137891 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Aug 12 23:40:52.137927 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Aug 12 23:40:52.137951 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Aug 12 23:40:52.137969 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Aug 12 23:40:52.137985 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Aug 12 23:40:52.138001 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Aug 12 23:40:52.138017 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Aug 12 23:40:52.138039 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Aug 12 23:40:52.138063 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 12 23:40:52.138080 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Aug 12 23:40:52.138097 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Aug 12 23:40:52.138114 kernel: psci: probing for conduit method from ACPI.
Aug 12 23:40:52.138136 kernel: psci: PSCIv1.0 detected in firmware.
Aug 12 23:40:52.138152 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 12 23:40:52.138170 kernel: psci: Trusted OS migration not required
Aug 12 23:40:52.138269 kernel: psci: SMC Calling Convention v1.1
Aug 12 23:40:52.138288 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Aug 12 23:40:52.138305 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Aug 12 23:40:52.138322 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Aug 12 23:40:52.138340 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 12 23:40:52.138357 kernel: Detected PIPT I-cache on CPU0
Aug 12 23:40:52.138374 kernel: CPU features: detected: GIC system register CPU interface
Aug 12 23:40:52.138391 kernel: CPU features: detected: Spectre-v2
Aug 12 23:40:52.138415 kernel: CPU features: detected: Spectre-v3a
Aug 12 23:40:52.138433 kernel: CPU features: detected: Spectre-BHB
Aug 12 23:40:52.138449 kernel: CPU features: detected: ARM erratum 1742098
Aug 12 23:40:52.138466 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Aug 12 23:40:52.138483 kernel: alternatives: applying boot alternatives
Aug 12 23:40:52.138502 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ce82f1ef836ba8581e59ce9db4eef4240d287b2b5f9937c28f0cd024f4dc9107
Aug 12 23:40:52.138521 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:40:52.138538 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:40:52.138555 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:40:52.138572 kernel: Fallback order for Node 0: 0
Aug 12 23:40:52.138593 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Aug 12 23:40:52.138613 kernel: Policy zone: Normal
Aug 12 23:40:52.138630 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:40:52.138647 kernel: software IO TLB: area num 2.
Aug 12 23:40:52.138664 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Aug 12 23:40:52.138681 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 12 23:40:52.138698 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:40:52.138716 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:40:52.138734 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 12 23:40:52.138752 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:40:52.138769 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:40:52.138787 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:40:52.138809 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 12 23:40:52.138827 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 12 23:40:52.138845 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 12 23:40:52.138862 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 12 23:40:52.138879 kernel: GICv3: 96 SPIs implemented
Aug 12 23:40:52.138896 kernel: GICv3: 0 Extended SPIs implemented
Aug 12 23:40:52.138914 kernel: Root IRQ handler: gic_handle_irq
Aug 12 23:40:52.138931 kernel: GICv3: GICv3 features: 16 PPIs
Aug 12 23:40:52.138948 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Aug 12 23:40:52.138965 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Aug 12 23:40:52.138982 kernel: ITS [mem 0x10080000-0x1009ffff]
Aug 12 23:40:52.138999 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Aug 12 23:40:52.139023 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Aug 12 23:40:52.139042 kernel: GICv3: using LPI property table @0x0000000400110000
Aug 12 23:40:52.139059 kernel: ITS: Using hypervisor restricted LPI range [128]
Aug 12 23:40:52.139077 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Aug 12 23:40:52.139094 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:40:52.139111 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Aug 12 23:40:52.139129 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Aug 12 23:40:52.139146 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Aug 12 23:40:52.139164 kernel: Console: colour dummy device 80x25
Aug 12 23:40:52.141274 kernel: printk: legacy console [tty1] enabled
Aug 12 23:40:52.141306 kernel: ACPI: Core revision 20240827
Aug 12 23:40:52.141335 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Aug 12 23:40:52.141352 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:40:52.141370 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 12 23:40:52.141387 kernel: landlock: Up and running.
Aug 12 23:40:52.141405 kernel: SELinux: Initializing.
Aug 12 23:40:52.141422 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:40:52.141439 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:40:52.141457 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:40:52.141475 kernel: rcu: Max phase no-delay instances is 400.
Aug 12 23:40:52.141498 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 12 23:40:52.141514 kernel: Remapping and enabling EFI services.
Aug 12 23:40:52.141531 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:40:52.141548 kernel: Detected PIPT I-cache on CPU1
Aug 12 23:40:52.141566 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Aug 12 23:40:52.141583 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Aug 12 23:40:52.141600 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Aug 12 23:40:52.141617 kernel: smp: Brought up 1 node, 2 CPUs
Aug 12 23:40:52.141635 kernel: SMP: Total of 2 processors activated.
Aug 12 23:40:52.141665 kernel: CPU: All CPU(s) started at EL1
Aug 12 23:40:52.141683 kernel: CPU features: detected: 32-bit EL0 Support
Aug 12 23:40:52.141705 kernel: CPU features: detected: 32-bit EL1 Support
Aug 12 23:40:52.141724 kernel: CPU features: detected: CRC32 instructions
Aug 12 23:40:52.141741 kernel: alternatives: applying system-wide alternatives
Aug 12 23:40:52.141760 kernel: Memory: 3796516K/4030464K available (11136K kernel code, 2436K rwdata, 9080K rodata, 39488K init, 1038K bss, 212600K reserved, 16384K cma-reserved)
Aug 12 23:40:52.141778 kernel: devtmpfs: initialized
Aug 12 23:40:52.141800 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:40:52.141819 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 12 23:40:52.141837 kernel: 16912 pages in range for non-PLT usage
Aug 12 23:40:52.141855 kernel: 508432 pages in range for PLT usage
Aug 12 23:40:52.141873 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:40:52.141892 kernel: SMBIOS 3.0.0 present.
Aug 12 23:40:52.141929 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Aug 12 23:40:52.141951 kernel: DMI: Memory slots populated: 0/0
Aug 12 23:40:52.141970 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:40:52.141994 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 12 23:40:52.142013 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 12 23:40:52.142031 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 12 23:40:52.142049 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:40:52.142068 kernel: audit: type=2000 audit(0.236:1): state=initialized audit_enabled=0 res=1
Aug 12 23:40:52.142087 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:40:52.142106 kernel: cpuidle: using governor menu
Aug 12 23:40:52.142124 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 12 23:40:52.142142 kernel: ASID allocator initialised with 65536 entries
Aug 12 23:40:52.142166 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:40:52.142271 kernel: Serial: AMBA PL011 UART driver
Aug 12 23:40:52.142293 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:40:52.142311 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 12 23:40:52.142330 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 12 23:40:52.142349 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 12 23:40:52.142367 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:40:52.142385 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:40:52.142404 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 12 23:40:52.142430 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 12 23:40:52.142450 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:40:52.142468 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:40:52.142488 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:40:52.142507 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:40:52.142526 kernel: ACPI: Interpreter enabled
Aug 12 23:40:52.142546 kernel: ACPI: Using GIC for interrupt routing
Aug 12 23:40:52.142565 kernel: ACPI: MCFG table detected, 1 entries
Aug 12 23:40:52.142584 kernel: ACPI: CPU0 has been hot-added
Aug 12 23:40:52.142608 kernel: ACPI: CPU1 has been hot-added
Aug 12 23:40:52.142627 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Aug 12 23:40:52.142986 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:40:52.144288 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 12 23:40:52.144531 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 12 23:40:52.144722 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Aug 12 23:40:52.144920 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Aug 12 23:40:52.144955 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Aug 12 23:40:52.144975 kernel: acpiphp: Slot [1] registered
Aug 12 23:40:52.144993 kernel: acpiphp: Slot [2] registered
Aug 12 23:40:52.145012 kernel: acpiphp: Slot [3] registered
Aug 12 23:40:52.145030 kernel: acpiphp: Slot [4] registered
Aug 12 23:40:52.145047 kernel: acpiphp: Slot [5] registered
Aug 12 23:40:52.145066 kernel: acpiphp: Slot [6] registered
Aug 12 23:40:52.145084 kernel: acpiphp: Slot [7] registered
Aug 12 23:40:52.145102 kernel: acpiphp: Slot [8] registered
Aug 12 23:40:52.145120 kernel: acpiphp: Slot [9] registered
Aug 12 23:40:52.145142 kernel: acpiphp: Slot [10] registered
Aug 12 23:40:52.145160 kernel: acpiphp: Slot [11] registered
Aug 12 23:40:52.147272 kernel: acpiphp: Slot [12] registered
Aug 12 23:40:52.147312 kernel: acpiphp: Slot [13] registered
Aug 12 23:40:52.147330 kernel: acpiphp: Slot [14] registered
Aug 12 23:40:52.147349 kernel: acpiphp: Slot [15] registered
Aug 12 23:40:52.147367 kernel: acpiphp: Slot [16] registered
Aug 12 23:40:52.147385 kernel: acpiphp: Slot [17] registered
Aug 12 23:40:52.147404 kernel: acpiphp: Slot [18] registered
Aug 12 23:40:52.147430 kernel: acpiphp: Slot [19] registered
Aug 12 23:40:52.147449 kernel: acpiphp: Slot [20] registered
Aug 12 23:40:52.147467 kernel: acpiphp: Slot [21] registered
Aug 12 23:40:52.147484 kernel: acpiphp: Slot [22] registered
Aug 12 23:40:52.147502 kernel: acpiphp: Slot [23] registered
Aug 12 23:40:52.147520 kernel: acpiphp: Slot [24] registered
Aug 12 23:40:52.147538 kernel: acpiphp: Slot [25] registered
Aug 12 23:40:52.147556 kernel: acpiphp: Slot [26] registered
Aug 12 23:40:52.147574 kernel: acpiphp: Slot [27] registered
Aug 12 23:40:52.147592 kernel: acpiphp: Slot [28] registered
Aug 12 23:40:52.147615 kernel: acpiphp: Slot [29] registered
Aug 12 23:40:52.147633 kernel: acpiphp: Slot [30] registered
Aug 12 23:40:52.147651 kernel: acpiphp: Slot [31] registered
Aug 12 23:40:52.147669 kernel: PCI host bridge to bus 0000:00
Aug 12 23:40:52.147908 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Aug 12 23:40:52.148087 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 12 23:40:52.149416 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Aug 12 23:40:52.149629 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Aug 12 23:40:52.149870 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Aug 12 23:40:52.154493 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Aug 12 23:40:52.154719 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Aug 12 23:40:52.154926 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Aug 12 23:40:52.155120 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Aug 12 23:40:52.155350 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 12 23:40:52.155579 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Aug 12 23:40:52.155775 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Aug 12 23:40:52.155973 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Aug 12 23:40:52.159999 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Aug 12 23:40:52.162789 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 12 23:40:52.163016 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Aug 12 23:40:52.163240 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Aug 12 23:40:52.163453 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Aug 12 23:40:52.163645 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Aug 12 23:40:52.163846 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Aug 12 23:40:52.164025 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Aug 12 23:40:52.164287 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 12 23:40:52.164466 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Aug 12 23:40:52.164497 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 12 23:40:52.164517 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 12 23:40:52.164535 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 12 23:40:52.164554 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 12 23:40:52.164571 kernel: iommu: Default domain type: Translated
Aug 12 23:40:52.164589 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 12 23:40:52.164608 kernel: efivars: Registered efivars operations
Aug 12 23:40:52.164625 kernel: vgaarb: loaded
Aug 12 23:40:52.164643 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 12 23:40:52.164661 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:40:52.164683 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:40:52.164701 kernel: pnp: PnP ACPI init
Aug 12 23:40:52.164897 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Aug 12 23:40:52.164924 kernel: pnp: PnP ACPI: found 1 devices
Aug 12 23:40:52.164942 kernel: NET: Registered PF_INET protocol family
Aug 12 23:40:52.164961 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:40:52.164979 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:40:52.164997 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:40:52.165021 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:40:52.165039 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 12 23:40:52.165057 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:40:52.165075 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:40:52.165093 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:40:52.165110 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:40:52.165128 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:40:52.165146 kernel: kvm [1]: HYP mode not available
Aug 12 23:40:52.165163 kernel: Initialise system trusted keyrings
Aug 12 23:40:52.165207 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:40:52.165252 kernel: Key type asymmetric registered
Aug 12 23:40:52.165277 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:40:52.165295 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 12 23:40:52.165314 kernel: io scheduler mq-deadline registered
Aug 12 23:40:52.165332 kernel: io scheduler kyber registered
Aug 12 23:40:52.165350 kernel: io scheduler bfq registered
Aug 12 23:40:52.165565 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Aug 12 23:40:52.165600 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 12 23:40:52.165620 kernel: ACPI: button: Power Button [PWRB]
Aug 12 23:40:52.165639 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Aug 12 23:40:52.165657 kernel: ACPI: button: Sleep Button [SLPB]
Aug 12 23:40:52.165676 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:40:52.165695 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Aug 12 23:40:52.165927 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Aug 12 23:40:52.165958 kernel: printk: legacy console [ttyS0] disabled
Aug 12 23:40:52.165977 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Aug 12 23:40:52.166002 kernel: printk: legacy console [ttyS0] enabled
Aug 12 23:40:52.166021 kernel: printk: legacy bootconsole [uart0] disabled
Aug 12 23:40:52.166040 kernel: thunder_xcv, ver 1.0
Aug 12 23:40:52.166081 kernel: thunder_bgx, ver 1.0
Aug 12 23:40:52.166104 kernel: nicpf, ver 1.0
Aug 12 23:40:52.166123 kernel: nicvf, ver 1.0
Aug 12 23:40:52.166385 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 12 23:40:52.166579 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-12T23:40:51 UTC (1755042051)
Aug 12 23:40:52.166613 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 12 23:40:52.166654 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Aug 12 23:40:52.166679 kernel: watchdog: NMI not fully supported
Aug 12 23:40:52.166699 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:40:52.166719 kernel: watchdog: Hard watchdog permanently disabled
Aug 12 23:40:52.166738 kernel: Segment Routing with IPv6
Aug 12 23:40:52.166758 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:40:52.166777 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:40:52.166796 kernel: Key type dns_resolver registered
Aug 12 23:40:52.166821 kernel: registered taskstats version 1
Aug 12 23:40:52.166841 kernel: Loading compiled-in X.509 certificates
Aug 12 23:40:52.166864 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: e74bfacfa68399ed7282bf533dd5901fdb84b882'
Aug 12 23:40:52.166882 kernel: Demotion targets for Node 0: null
Aug 12 23:40:52.166900 kernel: Key type .fscrypt registered
Aug 12 23:40:52.166918 kernel: Key type fscrypt-provisioning registered
Aug 12 23:40:52.166936 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:40:52.166955 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:40:52.166973 kernel: ima: No architecture policies found
Aug 12 23:40:52.166995 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 12 23:40:52.167014 kernel: clk: Disabling unused clocks
Aug 12 23:40:52.167032 kernel: PM: genpd: Disabling unused power domains
Aug 12 23:40:52.167051 kernel: Warning: unable to open an initial console.
Aug 12 23:40:52.167070 kernel: Freeing unused kernel memory: 39488K
Aug 12 23:40:52.167088 kernel: Run /init as init process
Aug 12 23:40:52.167107 kernel: with arguments:
Aug 12 23:40:52.167125 kernel: /init
Aug 12 23:40:52.167142 kernel: with environment:
Aug 12 23:40:52.167160 kernel: HOME=/
Aug 12 23:40:52.167392 kernel: TERM=linux
Aug 12 23:40:52.167416 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:40:52.167437 systemd[1]: Successfully made /usr/ read-only.
Aug 12 23:40:52.167462 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:40:52.167483 systemd[1]: Detected virtualization amazon.
Aug 12 23:40:52.167502 systemd[1]: Detected architecture arm64.
Aug 12 23:40:52.167521 systemd[1]: Running in initrd.
Aug 12 23:40:52.167548 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:40:52.167569 systemd[1]: Hostname set to .
Aug 12 23:40:52.167588 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:40:52.167607 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:40:52.167626 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:40:52.167646 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:40:52.167667 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:40:52.167687 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:40:52.167712 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:40:52.167733 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:40:52.167755 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:40:52.167775 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:40:52.167794 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:40:52.167814 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:40:52.167834 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:40:52.167858 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:40:52.167877 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:40:52.167897 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:40:52.167916 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:40:52.167936 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:40:52.167957 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:40:52.167976 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 12 23:40:52.167996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:40:52.168020 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:40:52.168041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:40:52.168060 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:40:52.168080 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 12 23:40:52.168100 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:40:52.168120 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 12 23:40:52.168140 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 12 23:40:52.168160 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:40:52.168218 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:40:52.168277 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:40:52.168300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:40:52.168320 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 12 23:40:52.168342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:40:52.168368 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:40:52.168390 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:40:52.168411 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:40:52.168430 kernel: Bridge firewalling registered
Aug 12 23:40:52.168512 systemd-journald[258]: Collecting audit messages is disabled.
Aug 12 23:40:52.168574 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:40:52.168601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:40:52.168623 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:40:52.168645 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:40:52.168667 systemd-journald[258]: Journal started
Aug 12 23:40:52.168708 systemd-journald[258]: Runtime Journal (/run/log/journal/ec231657880e7b392be241035d550d54) is 8M, max 75.3M, 67.3M free.
Aug 12 23:40:52.083363 systemd-modules-load[259]: Inserted module 'overlay'
Aug 12 23:40:52.175832 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:40:52.123012 systemd-modules-load[259]: Inserted module 'br_netfilter'
Aug 12 23:40:52.182633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:40:52.193892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:40:52.203239 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:40:52.215284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:40:52.244381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:40:52.246455 systemd-tmpfiles[277]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 12 23:40:52.261311 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:40:52.266610 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:40:52.275422 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 12 23:40:52.283355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:40:52.327853 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ce82f1ef836ba8581e59ce9db4eef4240d287b2b5f9937c28f0cd024f4dc9107
Aug 12 23:40:52.392075 systemd-resolved[300]: Positive Trust Anchors:
Aug 12 23:40:52.392109 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:40:52.392193 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:40:52.491212 kernel: SCSI subsystem initialized
Aug 12 23:40:52.499218 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:40:52.512219 kernel: iscsi: registered transport (tcp)
Aug 12 23:40:52.534924 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:40:52.534999 kernel: QLogic iSCSI HBA Driver
Aug 12 23:40:52.570368 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:40:52.606254 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:40:52.614911 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:40:52.675298 kernel: random: crng init done
Aug 12 23:40:52.676338 systemd-resolved[300]: Defaulting to hostname 'linux'.
Aug 12 23:40:52.683883 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:40:52.688576 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:40:52.710493 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:40:52.718542 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 12 23:40:52.804243 kernel: raid6: neonx8 gen() 6587 MB/s
Aug 12 23:40:52.821211 kernel: raid6: neonx4 gen() 6590 MB/s
Aug 12 23:40:52.838210 kernel: raid6: neonx2 gen() 5472 MB/s
Aug 12 23:40:52.855211 kernel: raid6: neonx1 gen() 3962 MB/s
Aug 12 23:40:52.872210 kernel: raid6: int64x8 gen() 3670 MB/s
Aug 12 23:40:52.889215 kernel: raid6: int64x4 gen() 3714 MB/s
Aug 12 23:40:52.906215 kernel: raid6: int64x2 gen() 3613 MB/s
Aug 12 23:40:52.924261 kernel: raid6: int64x1 gen() 2771 MB/s
Aug 12 23:40:52.924301 kernel: raid6: using algorithm neonx4 gen() 6590 MB/s
Aug 12 23:40:52.943215 kernel: raid6: .... xor() 4855 MB/s, rmw enabled
Aug 12 23:40:52.943266 kernel: raid6: using neon recovery algorithm
Aug 12 23:40:52.952012 kernel: xor: measuring software checksum speed
Aug 12 23:40:52.952089 kernel: 8regs : 12999 MB/sec
Aug 12 23:40:52.954516 kernel: 32regs : 12067 MB/sec
Aug 12 23:40:52.954552 kernel: arm64_neon : 9211 MB/sec
Aug 12 23:40:52.954577 kernel: xor: using function: 8regs (12999 MB/sec)
Aug 12 23:40:53.048235 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 12 23:40:53.059520 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:40:53.066871 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:40:53.130531 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Aug 12 23:40:53.142808 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:40:53.147396 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 12 23:40:53.187236 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Aug 12 23:40:53.231996 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:40:53.236868 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:40:53.383970 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:40:53.392291 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 12 23:40:53.567920 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 12 23:40:53.568013 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Aug 12 23:40:53.574895 kernel: ena 0000:00:05.0: ENA device version: 0.10
Aug 12 23:40:53.575268 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Aug 12 23:40:53.577622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:40:53.581248 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:40:53.588216 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Aug 12 23:40:53.588257 kernel: nvme nvme0: pci function 0000:00:04.0
Aug 12 23:40:53.588715 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:40:53.596208 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:36:b7:bc:44:cf
Aug 12 23:40:53.600578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:40:53.606041 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:40:53.610640 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 12 23:40:53.624313 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 12 23:40:53.624385 kernel: GPT:9289727 != 16777215
Aug 12 23:40:53.625675 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 12 23:40:53.626504 kernel: GPT:9289727 != 16777215
Aug 12 23:40:53.627702 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 12 23:40:53.628708 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 12 23:40:53.636674 (udev-worker)[562]: Network interface NamePolicy= disabled on kernel command line.
Aug 12 23:40:53.661844 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:40:53.683288 kernel: nvme nvme0: using unchecked data buffer
Aug 12 23:40:53.794758 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Aug 12 23:40:53.838652 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Aug 12 23:40:53.841682 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Aug 12 23:40:53.853966 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 12 23:40:53.891644 disk-uuid[683]: Primary Header is updated.
Aug 12 23:40:53.891644 disk-uuid[683]: Secondary Entries is updated.
Aug 12 23:40:53.891644 disk-uuid[683]: Secondary Header is updated.
Aug 12 23:40:53.911242 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 12 23:40:53.919834 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:40:53.932011 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:40:53.937431 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 12 23:40:53.935990 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:40:53.945316 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:40:53.951951 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 12 23:40:54.002298 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:40:54.190260 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Aug 12 23:40:54.452770 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 12 23:40:54.926225 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 12 23:40:54.928340 disk-uuid[684]: The operation has completed successfully.
Aug 12 23:40:55.132076 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 12 23:40:55.134260 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 12 23:40:55.215703 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 12 23:40:55.244754 sh[956]: Success
Aug 12 23:40:55.273864 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 12 23:40:55.273991 kernel: device-mapper: uevent: version 1.0.3
Aug 12 23:40:55.276211 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 12 23:40:55.287233 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Aug 12 23:40:55.413189 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 12 23:40:55.421143 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 12 23:40:55.450272 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 12 23:40:55.482664 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 12 23:40:55.482744 kernel: BTRFS: device fsid 7658cdd8-2ee4-4f84-82be-1f808605c89c devid 1 transid 42 /dev/mapper/usr (254:0) scanned by mount (991)
Aug 12 23:40:55.487803 kernel: BTRFS info (device dm-0): first mount of filesystem 7658cdd8-2ee4-4f84-82be-1f808605c89c
Aug 12 23:40:55.487871 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:40:55.487898 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 12 23:40:55.550480 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 12 23:40:55.554712 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 12 23:40:55.559811 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 12 23:40:55.565668 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 12 23:40:55.576815 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 12 23:40:55.626658 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1016)
Aug 12 23:40:55.631205 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb
Aug 12 23:40:55.631280 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:40:55.631308 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Aug 12 23:40:55.647242 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb
Aug 12 23:40:55.651303 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 12 23:40:55.657112 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 12 23:40:55.789617 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:40:55.797722 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:40:55.887829 systemd-networkd[1162]: lo: Link UP
Aug 12 23:40:55.894126 systemd-networkd[1162]: lo: Gained carrier
Aug 12 23:40:55.904802 systemd-networkd[1162]: Enumeration completed
Aug 12 23:40:55.905321 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:40:55.911818 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:40:55.911838 systemd-networkd[1162]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:40:55.916913 systemd[1]: Reached target network.target - Network.
Aug 12 23:40:55.929671 systemd-networkd[1162]: eth0: Link UP
Aug 12 23:40:55.929683 systemd-networkd[1162]: eth0: Gained carrier
Aug 12 23:40:55.929705 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:40:55.946292 systemd-networkd[1162]: eth0: DHCPv4 address 172.31.28.88/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 12 23:40:55.992018 ignition[1073]: Ignition 2.21.0
Aug 12 23:40:55.992931 ignition[1073]: Stage: fetch-offline
Aug 12 23:40:55.995129 ignition[1073]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:40:55.995154 ignition[1073]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 12 23:40:55.996041 ignition[1073]: Ignition finished successfully
Aug 12 23:40:56.005475 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:40:56.013419 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 12 23:40:56.056987 ignition[1172]: Ignition 2.21.0
Aug 12 23:40:56.058798 ignition[1172]: Stage: fetch
Aug 12 23:40:56.060601 ignition[1172]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:40:56.060631 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 12 23:40:56.060912 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 12 23:40:56.085554 ignition[1172]: PUT result: OK
Aug 12 23:40:56.090109 ignition[1172]: parsed url from cmdline: ""
Aug 12 23:40:56.090133 ignition[1172]: no config URL provided
Aug 12 23:40:56.090149 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign"
Aug 12 23:40:56.090198 ignition[1172]: no config at "/usr/lib/ignition/user.ign"
Aug 12 23:40:56.090265 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 12 23:40:56.094738 ignition[1172]: PUT result: OK
Aug 12 23:40:56.094869 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Aug 12 23:40:56.108431 ignition[1172]: GET result: OK
Aug 12 23:40:56.109924 ignition[1172]: parsing config with SHA512: 764632b0981348a07fd206a6077eec83f775cfe56e82afde9d3ee3689303146d4566adf6802df7dd5e6b0c27a02f450b7c8d5433c78c90ef0327dd83aced6ab6
Aug 12 23:40:56.123315 unknown[1172]: fetched base config from "system"
Aug 12 23:40:56.123338 unknown[1172]: fetched base config from "system"
Aug 12 23:40:56.125588 ignition[1172]: fetch: fetch complete
Aug 12 23:40:56.123352 unknown[1172]: fetched user config from "aws"
Aug 12 23:40:56.125602 ignition[1172]: fetch: fetch passed
Aug 12 23:40:56.133713 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 12 23:40:56.125706 ignition[1172]: Ignition finished successfully
Aug 12 23:40:56.141823 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 12 23:40:56.202828 ignition[1178]: Ignition 2.21.0
Aug 12 23:40:56.202860 ignition[1178]: Stage: kargs
Aug 12 23:40:56.204525 ignition[1178]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:40:56.204558 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 12 23:40:56.204712 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 12 23:40:56.210841 ignition[1178]: PUT result: OK
Aug 12 23:40:56.220796 ignition[1178]: kargs: kargs passed
Aug 12 23:40:56.220947 ignition[1178]: Ignition finished successfully
Aug 12 23:40:56.226107 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 12 23:40:56.233324 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 12 23:40:56.276768 ignition[1185]: Ignition 2.21.0
Aug 12 23:40:56.276798 ignition[1185]: Stage: disks
Aug 12 23:40:56.277741 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:40:56.277765 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 12 23:40:56.277922 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 12 23:40:56.280542 ignition[1185]: PUT result: OK
Aug 12 23:40:56.294774 ignition[1185]: disks: disks passed
Aug 12 23:40:56.297136 ignition[1185]: Ignition finished successfully
Aug 12 23:40:56.301263 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 12 23:40:56.306936 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 12 23:40:56.312671 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 12 23:40:56.323600 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:40:56.328356 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:40:56.331212 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:40:56.338809 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 12 23:40:56.400292 systemd-fsck[1194]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 12 23:40:56.403951 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 12 23:40:56.412767 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 12 23:40:56.539219 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d634334e-91a3-4b77-89ab-775bdd78a572 r/w with ordered data mode. Quota mode: none.
Aug 12 23:40:56.540258 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 12 23:40:56.544391 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:40:56.551399 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:40:56.563325 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 12 23:40:56.565699 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 12 23:40:56.565780 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 12 23:40:56.565830 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:40:56.592321 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 12 23:40:56.594578 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 12 23:40:56.613219 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1213)
Aug 12 23:40:56.618371 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb
Aug 12 23:40:56.619515 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:40:56.619550 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Aug 12 23:40:56.628374 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:40:56.707228 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
Aug 12 23:40:56.720162 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
Aug 12 23:40:56.732582 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
Aug 12 23:40:56.743238 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 12 23:40:56.900615 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 12 23:40:56.912040 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 12 23:40:56.930642 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 12 23:40:56.947770 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 12 23:40:56.951970 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb
Aug 12 23:40:57.000239 ignition[1326]: INFO : Ignition 2.21.0
Aug 12 23:40:57.000239 ignition[1326]: INFO : Stage: mount
Aug 12 23:40:57.009109 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:40:57.009109 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 12 23:40:57.009109 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 12 23:40:57.009109 ignition[1326]: INFO : PUT result: OK
Aug 12 23:40:57.027747 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 12 23:40:57.038253 ignition[1326]: INFO : mount: mount passed
Aug 12 23:40:57.038253 ignition[1326]: INFO : Ignition finished successfully
Aug 12 23:40:57.042378 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 12 23:40:57.056145 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 12 23:40:57.544581 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:40:57.554236 systemd-networkd[1162]: eth0: Gained IPv6LL
Aug 12 23:40:57.581222 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1337)
Aug 12 23:40:57.585388 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb
Aug 12 23:40:57.585447 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:40:57.585474 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Aug 12 23:40:57.597285 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:40:57.652894 ignition[1354]: INFO : Ignition 2.21.0
Aug 12 23:40:57.655017 ignition[1354]: INFO : Stage: files
Aug 12 23:40:57.657895 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:40:57.657895 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 12 23:40:57.662778 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 12 23:40:57.666999 ignition[1354]: INFO : PUT result: OK
Aug 12 23:40:57.672073 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping
Aug 12 23:40:57.675233 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 12 23:40:57.675233 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 12 23:40:57.685225 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 12 23:40:57.688514 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 12 23:40:57.692026 unknown[1354]: wrote ssh authorized keys file for user: core
Aug 12 23:40:57.695304 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 12 23:40:57.700048 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 12 23:40:57.705132 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Aug 12 23:40:57.789095 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 12 23:40:58.099060 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 12 23:40:58.104510 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 12 23:40:58.104510 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 12 23:40:58.104510 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:40:58.104510 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:40:58.104510 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:40:58.104510 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:40:58.104510 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:40:58.104510 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:40:58.135822 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:40:58.135822 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:40:58.135822 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 12 23:40:58.135822 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 12 23:40:58.135822 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 12 23:40:58.135822 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Aug 12 23:40:58.454345 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 12 23:40:58.821139 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 12 23:40:58.821139 ignition[1354]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 12 23:40:58.829144 ignition[1354]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:40:58.829144 ignition[1354]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:40:58.829144 ignition[1354]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 12 23:40:58.829144 ignition[1354]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 12 23:40:58.829144 ignition[1354]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 12 23:40:58.829144 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:40:58.829144 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:40:58.829144 ignition[1354]: INFO : files: files passed
Aug 12 23:40:58.829144 ignition[1354]: INFO : Ignition finished successfully
Aug 12 23:40:58.846242 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 12 23:40:58.851479 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 12 23:40:58.865764 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 12 23:40:58.906852 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 12 23:40:58.907318 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 12 23:40:58.923551 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:40:58.928421 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:40:58.931880 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:40:58.937486 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 12 23:40:58.945896 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 12 23:40:58.950862 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 12 23:40:59.048448 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 12 23:40:59.049105 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 12 23:40:59.056899 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 12 23:40:59.059426 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 12 23:40:59.066038 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 12 23:40:59.067455 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 12 23:40:59.107746 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:40:59.111522 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 12 23:40:59.152960 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:40:59.158287 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:40:59.161641 systemd[1]: Stopped target timers.target - Timer Units.
Aug 12 23:40:59.167847 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 12 23:40:59.168632 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:40:59.175635 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 12 23:40:59.180292 systemd[1]: Stopped target basic.target - Basic System.
Aug 12 23:40:59.183902 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 12 23:40:59.187303 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:40:59.191702 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 12 23:40:59.199508 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 12 23:40:59.204732 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 12 23:40:59.207756 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:40:59.215127 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 12 23:40:59.220015 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 12 23:40:59.222530 systemd[1]: Stopped target swap.target - Swaps.
Aug 12 23:40:59.228323 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 12 23:40:59.228889 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:40:59.235967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:40:59.238572 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:40:59.242116 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 12 23:40:59.246879 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:40:59.253728 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 12 23:40:59.254000 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:40:59.263400 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 12 23:40:59.263802 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 12 23:40:59.269122 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 12 23:40:59.269716 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 12 23:40:59.276355 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 12 23:40:59.290508 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 12 23:40:59.295216 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 12 23:40:59.297780 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:40:59.301205 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 12 23:40:59.301473 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:40:59.320780 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 12 23:40:59.321255 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 12 23:40:59.353154 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 12 23:40:59.363464 ignition[1407]: INFO : Ignition 2.21.0
Aug 12 23:40:59.363464 ignition[1407]: INFO : Stage: umount
Aug 12 23:40:59.370102 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:40:59.370102 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 12 23:40:59.370102 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 12 23:40:59.370102 ignition[1407]: INFO : PUT result: OK
Aug 12 23:40:59.385617 ignition[1407]: INFO : umount: umount passed
Aug 12 23:40:59.372648 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 12 23:40:59.375479 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 12 23:40:59.397204 ignition[1407]: INFO : Ignition finished successfully
Aug 12 23:40:59.390373 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 12 23:40:59.390620 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 12 23:40:59.396511 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 12 23:40:59.396981 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 12 23:40:59.403525 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 12 23:40:59.403623 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 12 23:40:59.405385 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 12 23:40:59.405465 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 12 23:40:59.411628 systemd[1]: Stopped target network.target - Network.
Aug 12 23:40:59.415301 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 12 23:40:59.415395 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:40:59.418273 systemd[1]: Stopped target paths.target - Path Units.
Aug 12 23:40:59.424417 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 12 23:40:59.428291 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:40:59.430955 systemd[1]: Stopped target slices.target - Slice Units.
Aug 12 23:40:59.433432 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 12 23:40:59.440418 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 12 23:40:59.440495 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:40:59.447502 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 12 23:40:59.447569 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:40:59.452790 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 12 23:40:59.452889 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 12 23:40:59.455670 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 12 23:40:59.455750 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 12 23:40:59.460452 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 12 23:40:59.460540 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 12 23:40:59.463213 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 12 23:40:59.470514 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 12 23:40:59.502227 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 12 23:40:59.502958 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 12 23:40:59.511091 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 12 23:40:59.512392 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 12 23:40:59.522098 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 12 23:40:59.522203 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:40:59.533336 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 12 23:40:59.539348 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 12 23:40:59.539488 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:40:59.542351 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:40:59.545822 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 12 23:40:59.549957 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 12 23:40:59.572835 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 12 23:40:59.580525 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 12 23:40:59.580733 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:40:59.590927 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 12 23:40:59.591028 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:40:59.602343 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 12 23:40:59.602453 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:40:59.610269 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 12 23:40:59.610409 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:40:59.610973 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 12 23:40:59.611302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:40:59.616266 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 12 23:40:59.616403 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:40:59.619098 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 12 23:40:59.619192 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:40:59.621700 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 12 23:40:59.621837 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:40:59.627712 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 12 23:40:59.627815 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:40:59.633429 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:40:59.633543 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:40:59.646705 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 12 23:40:59.652814 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 12 23:40:59.652936 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:40:59.675708 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 12 23:40:59.678214 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:40:59.692915 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 12 23:40:59.693033 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:40:59.701224 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 12 23:40:59.701353 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:40:59.708373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:40:59.708497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:40:59.713129 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Aug 12 23:40:59.713554 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Aug 12 23:40:59.713639 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 12 23:40:59.713725 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:40:59.714607 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 12 23:40:59.716859 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 12 23:40:59.725768 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 12 23:40:59.725965 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 12 23:40:59.730301 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 12 23:40:59.744452 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 12 23:40:59.796834 systemd[1]: Switching root.
Aug 12 23:40:59.870006 systemd-journald[258]: Journal stopped
Aug 12 23:41:02.101058 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Aug 12 23:41:02.101234 kernel: SELinux: policy capability network_peer_controls=1
Aug 12 23:41:02.101280 kernel: SELinux: policy capability open_perms=1
Aug 12 23:41:02.101311 kernel: SELinux: policy capability extended_socket_class=1
Aug 12 23:41:02.101342 kernel: SELinux: policy capability always_check_network=0
Aug 12 23:41:02.101370 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 12 23:41:02.101400 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 12 23:41:02.101439 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 12 23:41:02.101478 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 12 23:41:02.101514 kernel: SELinux: policy capability userspace_initial_context=0
Aug 12 23:41:02.101543 kernel: audit: type=1403 audit(1755042060.190:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 12 23:41:02.101573 systemd[1]: Successfully loaded SELinux policy in 63.807ms.
Aug 12 23:41:02.101647 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.513ms.
Aug 12 23:41:02.101680 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:41:02.101713 systemd[1]: Detected virtualization amazon.
Aug 12 23:41:02.101743 systemd[1]: Detected architecture arm64.
Aug 12 23:41:02.101774 systemd[1]: Detected first boot.
Aug 12 23:41:02.101805 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:41:02.101865 zram_generator::config[1451]: No configuration found.
Aug 12 23:41:02.101908 kernel: NET: Registered PF_VSOCK protocol family
Aug 12 23:41:02.101940 systemd[1]: Populated /etc with preset unit settings.
Aug 12 23:41:02.101971 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 12 23:41:02.102000 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 12 23:41:02.102028 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 12 23:41:02.102058 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 12 23:41:02.102091 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 12 23:41:02.102127 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 12 23:41:02.102157 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 12 23:41:02.110260 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 12 23:41:02.110318 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 12 23:41:02.110351 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 12 23:41:02.110383 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 12 23:41:02.110411 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 12 23:41:02.110444 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:41:02.110473 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:41:02.110512 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 12 23:41:02.110542 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 12 23:41:02.110571 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 12 23:41:02.110599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:41:02.110626 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 12 23:41:02.110655 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:41:02.110686 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:41:02.110721 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 12 23:41:02.110750 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 12 23:41:02.110778 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:41:02.110806 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 12 23:41:02.110836 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:41:02.110865 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:41:02.110893 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:41:02.110924 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:41:02.110954 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 12 23:41:02.110987 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 12 23:41:02.111020 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 12 23:41:02.111050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:41:02.111080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:41:02.111109 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:41:02.111137 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 12 23:41:02.111166 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 12 23:41:02.111231 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 12 23:41:02.111266 systemd[1]: Mounting media.mount - External Media Directory...
Aug 12 23:41:02.111298 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 12 23:41:02.111335 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 12 23:41:02.111365 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 12 23:41:02.111397 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 12 23:41:02.111431 systemd[1]: Reached target machines.target - Containers.
Aug 12 23:41:02.111459 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 12 23:41:02.111487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:41:02.111516 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:41:02.111544 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 12 23:41:02.111576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:41:02.111603 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:41:02.111631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:41:02.111660 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 12 23:41:02.111688 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:41:02.111718 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 12 23:41:02.111745 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 12 23:41:02.111772 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 12 23:41:02.111803 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 12 23:41:02.111830 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 12 23:41:02.111861 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:41:02.111893 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:41:02.111920 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:41:02.111946 kernel: fuse: init (API version 7.41)
Aug 12 23:41:02.111977 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:41:02.112012 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 12 23:41:02.112044 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 12 23:41:02.112075 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:41:02.112109 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 12 23:41:02.112141 systemd[1]: Stopped verity-setup.service.
Aug 12 23:41:02.112169 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 12 23:41:02.117935 kernel: loop: module loaded
Aug 12 23:41:02.117972 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 12 23:41:02.118002 systemd[1]: Mounted media.mount - External Media Directory.
Aug 12 23:41:02.118032 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 12 23:41:02.118060 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 12 23:41:02.118089 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 12 23:41:02.118126 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:41:02.118155 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 12 23:41:02.118239 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 12 23:41:02.118274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:41:02.118303 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:41:02.118334 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:41:02.118363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:41:02.118393 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 12 23:41:02.118422 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 12 23:41:02.118592 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:41:02.118628 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:41:02.118658 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:41:02.118690 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:41:02.118720 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:41:02.118753 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 12 23:41:02.118789 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 12 23:41:02.118818 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:41:02.118850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:41:02.118884 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:41:02.118919 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 12 23:41:02.118949 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 12 23:41:02.118977 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 12 23:41:02.119008 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 12 23:41:02.119041 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:41:02.119069 kernel: ACPI: bus type drm_connector registered
Aug 12 23:41:02.119097 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 12 23:41:02.119125 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 12 23:41:02.119155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:41:02.119246 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 12 23:41:02.119285 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:41:02.119323 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 12 23:41:02.119359 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 12 23:41:02.119388 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:41:02.119478 systemd-journald[1530]: Collecting audit messages is disabled.
Aug 12 23:41:02.119537 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:41:02.119567 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 12 23:41:02.119595 systemd-journald[1530]: Journal started
Aug 12 23:41:02.119646 systemd-journald[1530]: Runtime Journal (/run/log/journal/ec231657880e7b392be241035d550d54) is 8M, max 75.3M, 67.3M free.
Aug 12 23:41:01.290884 systemd[1]: Queued start job for default target multi-user.target.
Aug 12 23:41:02.128658 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:41:01.306927 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Aug 12 23:41:01.307767 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 12 23:41:02.129542 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 12 23:41:02.157028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:41:02.170249 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 12 23:41:02.193415 kernel: loop0: detected capacity change from 0 to 61240
Aug 12 23:41:02.203867 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 12 23:41:02.223350 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 12 23:41:02.231274 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 12 23:41:02.238386 systemd-journald[1530]: Time spent on flushing to /var/log/journal/ec231657880e7b392be241035d550d54 is 136.004ms for 936 entries.
Aug 12 23:41:02.238386 systemd-journald[1530]: System Journal (/var/log/journal/ec231657880e7b392be241035d550d54) is 8M, max 195.6M, 187.6M free.
Aug 12 23:41:02.397251 systemd-journald[1530]: Received client request to flush runtime journal.
Aug 12 23:41:02.399949 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 12 23:41:02.400027 kernel: loop1: detected capacity change from 0 to 138376
Aug 12 23:41:02.251496 systemd-tmpfiles[1556]: ACLs are not supported, ignoring.
Aug 12 23:41:02.251521 systemd-tmpfiles[1556]: ACLs are not supported, ignoring.
Aug 12 23:41:02.298499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:41:02.307061 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 12 23:41:02.326678 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 12 23:41:02.330131 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 12 23:41:02.384319 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:41:02.408308 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 12 23:41:02.421296 kernel: loop2: detected capacity change from 0 to 107312
Aug 12 23:41:02.481843 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 12 23:41:02.490535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:41:02.505884 kernel: loop3: detected capacity change from 0 to 207008
Aug 12 23:41:02.539797 systemd-tmpfiles[1608]: ACLs are not supported, ignoring.
Aug 12 23:41:02.539842 systemd-tmpfiles[1608]: ACLs are not supported, ignoring.
Aug 12 23:41:02.548501 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:41:02.620243 kernel: loop4: detected capacity change from 0 to 61240
Aug 12 23:41:02.641299 kernel: loop5: detected capacity change from 0 to 138376
Aug 12 23:41:02.673232 kernel: loop6: detected capacity change from 0 to 107312
Aug 12 23:41:02.707386 kernel: loop7: detected capacity change from 0 to 207008
Aug 12 23:41:02.743014 (sd-merge)[1612]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Aug 12 23:41:02.745472 (sd-merge)[1612]: Merged extensions into '/usr'.
Aug 12 23:41:02.757907 systemd[1]: Reload requested from client PID 1564 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 12 23:41:02.758144 systemd[1]: Reloading...
Aug 12 23:41:02.984229 zram_generator::config[1641]: No configuration found.
Aug 12 23:41:03.096312 ldconfig[1557]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
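Context for the `(sd-merge)` entries above: systemd-sysext is overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' extension images onto /usr (the loop0-loop7 capacity changes are those images being attached as loop devices). Each extension image carries an extension-release file that systemd-sysext checks against the host's os-release before merging. A minimal sketch of such a file; the name and values here are illustrative assumptions, not read from these images:

```ini
# Inside the extension image, e.g.
# /usr/lib/extension-release.d/extension-release.kubernetes
ID=flatcar
SYSEXT_LEVEL=1.0
```

If the ID (or SYSEXT_LEVEL, when set) does not match the running OS, the extension is refused rather than merged.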
Aug 12 23:41:03.242243 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:41:03.438873 systemd[1]: Reloading finished in 679 ms.
Aug 12 23:41:03.468268 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 12 23:41:03.472842 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 12 23:41:03.497854 systemd[1]: Starting ensure-sysext.service...
Aug 12 23:41:03.505552 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:41:03.540257 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 12 23:41:03.555709 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:41:03.558906 systemd[1]: Reload requested from client PID 1690 ('systemctl') (unit ensure-sysext.service)...
Aug 12 23:41:03.558923 systemd[1]: Reloading...
Aug 12 23:41:03.572362 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 12 23:41:03.574836 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 12 23:41:03.576617 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 12 23:41:03.577120 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 12 23:41:03.582396 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 12 23:41:03.582973 systemd-tmpfiles[1692]: ACLs are not supported, ignoring.
Aug 12 23:41:03.583100 systemd-tmpfiles[1692]: ACLs are not supported, ignoring.
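Context for the `Duplicate line for path` warnings above: systemd-tmpfiles reads every tmpfiles.d fragment, and when more than one entry claims the same path, the entry read first wins and later ones are ignored with exactly this warning. An illustration of the line format (Type Path Mode User Group Age Argument); these example lines are assumptions, not the actual contents of the files named in the log:

```ini
# First entry read for the path wins
d /var/log/journal 2755 root systemd-journal - -
# A later entry for the same path would trigger
# "Duplicate line for path ..., ignoring." and be skipped
d /var/log/journal 0755 root root - -
```

This is why the warnings are harmless here: the sysext merge layered extra tmpfiles.d fragments into /usr, and the overlapping entries are simply deduplicated.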
Aug 12 23:41:03.599658 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:41:03.599873 systemd-tmpfiles[1692]: Skipping /boot
Aug 12 23:41:03.631612 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:41:03.631816 systemd-tmpfiles[1692]: Skipping /boot
Aug 12 23:41:03.710546 systemd-udevd[1695]: Using default interface naming scheme 'v255'.
Aug 12 23:41:03.759917 zram_generator::config[1726]: No configuration found.
Aug 12 23:41:04.058642 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:41:04.123432 (udev-worker)[1746]: Network interface NamePolicy= disabled on kernel command line.
Aug 12 23:41:04.344753 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 12 23:41:04.345358 systemd[1]: Reloading finished in 785 ms.
Aug 12 23:41:04.364925 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:41:04.394223 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:41:04.438428 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:41:04.454126 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:41:04.462129 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 12 23:41:04.465143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:41:04.471564 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:41:04.482146 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:41:04.509199 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:41:04.515707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:41:04.518379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:41:04.518451 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:41:04.523670 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 12 23:41:04.530829 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:41:04.540562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:41:04.543353 systemd[1]: Reached target time-set.target - System Time Set.
Aug 12 23:41:04.550607 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 12 23:41:04.554670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:41:04.555624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:41:04.568690 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:41:04.569827 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:41:04.609957 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:41:04.614300 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:41:04.617406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:41:04.658601 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 12 23:41:04.671699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:41:04.674285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:41:04.677590 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:41:04.712946 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 12 23:41:04.728525 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 12 23:41:04.753960 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 12 23:41:04.759768 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 12 23:41:04.767036 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:41:04.845328 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 12 23:41:04.848735 augenrules[1928]: No rules
Aug 12 23:41:04.851429 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:41:04.851925 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:41:05.000867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:41:05.091944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 12 23:41:05.098463 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 12 23:41:05.150377 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 12 23:41:05.157478 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 12 23:41:05.249355 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:41:05.329511 systemd-networkd[1854]: lo: Link UP
Aug 12 23:41:05.329532 systemd-networkd[1854]: lo: Gained carrier
Aug 12 23:41:05.332623 systemd-networkd[1854]: Enumeration completed
Aug 12 23:41:05.333081 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:41:05.333748 systemd-networkd[1854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:41:05.333756 systemd-networkd[1854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:41:05.342666 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 12 23:41:05.351480 systemd-networkd[1854]: eth0: Link UP
Aug 12 23:41:05.351778 systemd-networkd[1854]: eth0: Gained carrier
Aug 12 23:41:05.351830 systemd-networkd[1854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:41:05.352477 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 12 23:41:05.357786 systemd-resolved[1855]: Positive Trust Anchors:
Aug 12 23:41:05.357834 systemd-resolved[1855]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:41:05.357900 systemd-resolved[1855]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:41:05.372289 systemd-networkd[1854]: eth0: DHCPv4 address 172.31.28.88/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 12 23:41:05.376609 systemd-resolved[1855]: Defaulting to hostname 'linux'.
Aug 12 23:41:05.380286 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:41:05.385989 systemd[1]: Reached target network.target - Network.
Aug 12 23:41:05.390552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:41:05.393238 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:41:05.397925 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 12 23:41:05.403648 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 12 23:41:05.406870 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 12 23:41:05.409511 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 12 23:41:05.412489 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 12 23:41:05.415281 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
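The "Positive Trust Anchors" entry that systemd-resolved logs above is the DNS root zone's DNSSEC DS record (key tag 20326, the root KSK). A minimal sketch of splitting such a record into its RFC 4034 fields, using the record text exactly as logged:

```python
# Parse the root trust anchor DS record as systemd-resolved prints it.
# Field layout per RFC 4034: owner, class, type, key tag, algorithm, digest type, digest.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, klass, rtype, key_tag, algorithm, digest_type, digest = record.split()

ds = {
    "owner": owner,                   # "." is the root zone
    "key_tag": int(key_tag),          # 20326 identifies the root key-signing key
    "algorithm": int(algorithm),      # 8 = RSA/SHA-256
    "digest_type": int(digest_type),  # 2 = SHA-256 (so the digest is 64 hex chars)
    "digest": digest,
}
print(ds["key_tag"], ds["algorithm"], ds["digest_type"])  # 20326 8 2
```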
Aug 12 23:41:05.415343 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:41:05.417426 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:41:05.421330 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 12 23:41:05.427304 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 12 23:41:05.433747 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 12 23:41:05.437163 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 12 23:41:05.440197 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 12 23:41:05.451579 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 12 23:41:05.455580 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 12 23:41:05.461285 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 12 23:41:05.464725 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 12 23:41:05.468296 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:41:05.471028 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:41:05.473236 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:41:05.473432 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:41:05.475803 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 12 23:41:05.482374 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 12 23:41:05.487640 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 12 23:41:05.493018 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 12 23:41:05.498868 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 12 23:41:05.507532 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 12 23:41:05.508798 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 12 23:41:05.514772 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 12 23:41:05.522456 systemd[1]: Started ntpd.service - Network Time Service.
Aug 12 23:41:05.529507 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 12 23:41:05.541460 systemd[1]: Starting setup-oem.service - Setup OEM...
Aug 12 23:41:05.547603 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 12 23:41:05.554374 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 12 23:41:05.569664 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 12 23:41:05.575659 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 12 23:41:05.576575 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 12 23:41:05.581876 systemd[1]: Starting update-engine.service - Update Engine...
Aug 12 23:41:05.618299 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 12 23:41:05.658117 jq[1978]: false
Aug 12 23:41:05.664526 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 12 23:41:05.680477 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 12 23:41:05.682263 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 12 23:41:05.724170 extend-filesystems[1979]: Found /dev/nvme0n1p6
Aug 12 23:41:05.734885 tar[2004]: linux-arm64/LICENSE
Aug 12 23:41:05.734885 tar[2004]: linux-arm64/helm
Aug 12 23:41:05.739284 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 12 23:41:05.740294 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 12 23:41:05.758250 systemd[1]: motdgen.service: Deactivated successfully.
Aug 12 23:41:05.758791 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 12 23:41:05.764986 extend-filesystems[1979]: Found /dev/nvme0n1p9
Aug 12 23:41:05.773041 jq[1990]: true
Aug 12 23:41:05.782743 extend-filesystems[1979]: Checking size of /dev/nvme0n1p9
Aug 12 23:41:05.813676 ntpd[1981]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 20:58:45 UTC 2025 (1): Starting
Aug 12 23:41:05.815165 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 20:58:45 UTC 2025 (1): Starting
Aug 12 23:41:05.815165 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 12 23:41:05.815165 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: ----------------------------------------------------
Aug 12 23:41:05.815165 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: ntp-4 is maintained by Network Time Foundation,
Aug 12 23:41:05.815165 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 12 23:41:05.815165 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: corporation. Support and training for ntp-4 are
Aug 12 23:41:05.815165 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: available at https://www.nwtime.org/support
Aug 12 23:41:05.813731 ntpd[1981]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 12 23:41:05.813749 ntpd[1981]: ----------------------------------------------------
Aug 12 23:41:05.813767 ntpd[1981]: ntp-4 is maintained by Network Time Foundation,
Aug 12 23:41:05.813783 ntpd[1981]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 12 23:41:05.813823 ntpd[1981]: corporation. Support and training for ntp-4 are
Aug 12 23:41:05.813846 ntpd[1981]: available at https://www.nwtime.org/support
Aug 12 23:41:05.813862 ntpd[1981]: ----------------------------------------------------
Aug 12 23:41:05.824832 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: ----------------------------------------------------
Aug 12 23:41:05.833310 (ntainerd)[2013]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 12 23:41:05.834170 ntpd[1981]: proto: precision = 0.096 usec (-23)
Aug 12 23:41:05.843463 jq[2021]: true
Aug 12 23:41:05.847303 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: proto: precision = 0.096 usec (-23)
Aug 12 23:41:05.848378 ntpd[1981]: basedate set to 2025-07-31
Aug 12 23:41:05.848421 ntpd[1981]: gps base set to 2025-08-03 (week 2378)
Aug 12 23:41:05.848684 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: basedate set to 2025-07-31
Aug 12 23:41:05.848684 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: gps base set to 2025-08-03 (week 2378)
Aug 12 23:41:05.863333 dbus-daemon[1976]: [system] SELinux support is enabled
Aug 12 23:41:05.865233 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 12 23:41:05.874013 coreos-metadata[1975]: Aug 12 23:41:05.873 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 12 23:41:05.875297 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 12 23:41:05.888541 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: Listen and drop on 0 v6wildcard [::]:123
Aug 12 23:41:05.888541 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 12 23:41:05.888541 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: Listen normally on 2 lo 127.0.0.1:123
Aug 12 23:41:05.888541 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: Listen normally on 3 eth0 172.31.28.88:123
Aug 12 23:41:05.888541 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: Listen normally on 4 lo [::1]:123
Aug 12 23:41:05.888541 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: bind(21) AF_INET6 fe80::436:b7ff:febc:44cf%2#123 flags 0x11 failed: Cannot assign requested address
Aug 12 23:41:05.888541 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: unable to create socket on eth0 (5) for fe80::436:b7ff:febc:44cf%2#123
Aug 12 23:41:05.888541 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: failed to init interface for address fe80::436:b7ff:febc:44cf%2
Aug 12 23:41:05.888541 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: Listening on routing socket on fd #21 for interface updates
Aug 12 23:41:05.883522 ntpd[1981]: Listen and drop on 0 v6wildcard [::]:123
Aug 12 23:41:05.875347 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 12 23:41:05.883602 ntpd[1981]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 12 23:41:05.878443 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 12 23:41:05.883849 ntpd[1981]: Listen normally on 2 lo 127.0.0.1:123
Aug 12 23:41:05.878477 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 12 23:41:05.883906 ntpd[1981]: Listen normally on 3 eth0 172.31.28.88:123
Aug 12 23:41:05.883969 ntpd[1981]: Listen normally on 4 lo [::1]:123
Aug 12 23:41:05.884034 ntpd[1981]: bind(21) AF_INET6 fe80::436:b7ff:febc:44cf%2#123 flags 0x11 failed: Cannot assign requested address
Aug 12 23:41:05.884072 ntpd[1981]: unable to create socket on eth0 (5) for fe80::436:b7ff:febc:44cf%2#123
Aug 12 23:41:05.884096 ntpd[1981]: failed to init interface for address fe80::436:b7ff:febc:44cf%2
Aug 12 23:41:05.884146 ntpd[1981]: Listening on routing socket on fd #21 for interface updates
Aug 12 23:41:05.921569 coreos-metadata[1975]: Aug 12 23:41:05.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Aug 12 23:41:05.915651 systemd[1]: Finished setup-oem.service - Setup OEM.
Aug 12 23:41:05.925495 coreos-metadata[1975]: Aug 12 23:41:05.922 INFO Fetch successful
Aug 12 23:41:05.925495 coreos-metadata[1975]: Aug 12 23:41:05.922 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Aug 12 23:41:05.925495 coreos-metadata[1975]: Aug 12 23:41:05.925 INFO Fetch successful
Aug 12 23:41:05.925495 coreos-metadata[1975]: Aug 12 23:41:05.925 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Aug 12 23:41:05.925790 extend-filesystems[1979]: Resized partition /dev/nvme0n1p9
Aug 12 23:41:05.928896 dbus-daemon[1976]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1854 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 12 23:41:05.939976 coreos-metadata[1975]: Aug 12 23:41:05.936 INFO Fetch successful
Aug 12 23:41:05.939976 coreos-metadata[1975]: Aug 12 23:41:05.936 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Aug 12 23:41:05.941413 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 12 23:41:05.941413 ntpd[1981]: 12 Aug 23:41:05 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 12 23:41:05.936849 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 12 23:41:05.936599 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 12 23:41:05.936898 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 12 23:41:05.943564 coreos-metadata[1975]: Aug 12 23:41:05.943 INFO Fetch successful
Aug 12 23:41:05.943564 coreos-metadata[1975]: Aug 12 23:41:05.943 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Aug 12 23:41:05.948441 extend-filesystems[2038]: resize2fs 1.47.2 (1-Jan-2025)
Aug 12 23:41:05.957602 coreos-metadata[1975]: Aug 12 23:41:05.953 INFO Fetch failed with 404: resource not found
Aug 12 23:41:05.957602 coreos-metadata[1975]: Aug 12 23:41:05.953 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Aug 12 23:41:05.962892 coreos-metadata[1975]: Aug 12 23:41:05.961 INFO Fetch successful
Aug 12 23:41:05.962892 coreos-metadata[1975]: Aug 12 23:41:05.961 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Aug 12 23:41:05.969463 coreos-metadata[1975]: Aug 12 23:41:05.967 INFO Fetch successful
Aug 12 23:41:05.969463 coreos-metadata[1975]: Aug 12 23:41:05.967 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Aug 12 23:41:05.973086 update_engine[1987]: I20250812 23:41:05.971879 1987 main.cc:92] Flatcar Update Engine starting
Aug 12 23:41:05.973603 coreos-metadata[1975]: Aug 12 23:41:05.972 INFO Fetch successful
Aug 12 23:41:05.973603 coreos-metadata[1975]: Aug 12 23:41:05.972 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Aug 12 23:41:05.978809 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Aug 12 23:41:05.978889 coreos-metadata[1975]: Aug 12 23:41:05.974 INFO Fetch successful
Aug 12 23:41:05.978889 coreos-metadata[1975]: Aug 12 23:41:05.974 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Aug 12 23:41:05.985885 coreos-metadata[1975]: Aug 12 23:41:05.982 INFO Fetch successful
Aug 12 23:41:05.986419 systemd[1]: Started update-engine.service - Update Engine.
Aug 12 23:41:05.994747 update_engine[1987]: I20250812 23:41:05.994598 1987 update_check_scheduler.cc:74] Next update check in 3m59s
Aug 12 23:41:06.018807 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 12 23:41:06.120070 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 12 23:41:06.127221 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Aug 12 23:41:06.130058 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 12 23:41:06.142623 extend-filesystems[2038]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Aug 12 23:41:06.142623 extend-filesystems[2038]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 12 23:41:06.142623 extend-filesystems[2038]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Aug 12 23:41:06.178548 extend-filesystems[1979]: Resized filesystem in /dev/nvme0n1p9
Aug 12 23:41:06.153240 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 12 23:41:06.183117 bash[2051]: Updated "/home/core/.ssh/authorized_keys"
Aug 12 23:41:06.153750 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 12 23:41:06.160318 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 12 23:41:06.188121 systemd[1]: Starting sshkeys.service...
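The EXT4-fs kernel messages above record an online grow of the root filesystem from 553472 to 1489915 blocks, and extend-filesystems reports a 4 KiB block size. A small sketch of the arithmetic behind those numbers (block size taken from the "(4k) blocks long" message):

```python
BLOCK_SIZE = 4096  # "(4k) blocks long" per the extend-filesystems output

old_blocks, new_blocks = 553472, 1489915  # from the EXT4-fs resize messages

old_bytes = old_blocks * BLOCK_SIZE
new_bytes = new_blocks * BLOCK_SIZE

print(f"before: {old_bytes / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {new_bytes / 2**30:.2f} GiB")  # ~5.68 GiB
```

Because the partition was grown in place while mounted on /, resize2fs performs the resize online, which is why the log shows "on-line resizing required" rather than an unmount/fsck cycle.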
Aug 12 23:41:06.224050 systemd-logind[1986]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 12 23:41:06.224095 systemd-logind[1986]: Watching system buttons on /dev/input/event1 (Sleep Button)
Aug 12 23:41:06.224494 systemd-logind[1986]: New seat seat0.
Aug 12 23:41:06.234693 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 12 23:41:06.262058 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 12 23:41:06.276548 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 12 23:41:06.482321 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 12 23:41:06.619736 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 12 23:41:06.641895 dbus-daemon[1976]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 12 23:41:06.660947 dbus-daemon[1976]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2040 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 12 23:41:06.673738 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 12 23:41:06.813124 locksmithd[2050]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 12 23:41:06.826511 ntpd[1981]: bind(24) AF_INET6 fe80::436:b7ff:febc:44cf%2#123 flags 0x11 failed: Cannot assign requested address
Aug 12 23:41:06.830538 ntpd[1981]: 12 Aug 23:41:06 ntpd[1981]: bind(24) AF_INET6 fe80::436:b7ff:febc:44cf%2#123 flags 0x11 failed: Cannot assign requested address
Aug 12 23:41:06.830538 ntpd[1981]: 12 Aug 23:41:06 ntpd[1981]: unable to create socket on eth0 (6) for fe80::436:b7ff:febc:44cf%2#123
Aug 12 23:41:06.830538 ntpd[1981]: 12 Aug 23:41:06 ntpd[1981]: failed to init interface for address fe80::436:b7ff:febc:44cf%2
Aug 12 23:41:06.826571 ntpd[1981]: unable to create socket on eth0 (6) for fe80::436:b7ff:febc:44cf%2#123
Aug 12 23:41:06.826597 ntpd[1981]: failed to init interface for address fe80::436:b7ff:febc:44cf%2
Aug 12 23:41:06.885897 coreos-metadata[2069]: Aug 12 23:41:06.885 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 12 23:41:06.893241 coreos-metadata[2069]: Aug 12 23:41:06.892 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Aug 12 23:41:06.898459 coreos-metadata[2069]: Aug 12 23:41:06.898 INFO Fetch successful
Aug 12 23:41:06.898459 coreos-metadata[2069]: Aug 12 23:41:06.898 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Aug 12 23:41:06.903211 coreos-metadata[2069]: Aug 12 23:41:06.899 INFO Fetch successful
Aug 12 23:41:06.908853 unknown[2069]: wrote ssh authorized keys file for user: core
Aug 12 23:41:07.014499 update-ssh-keys[2172]: Updated "/home/core/.ssh/authorized_keys"
Aug 12 23:41:07.022244 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 12 23:41:07.035546 systemd[1]: Finished sshkeys.service.
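The coreos-metadata "Putting http://169.254.169.254/latest/api/token" entries above are the first half of the EC2 IMDSv2 handshake: a PUT obtains a session token, and subsequent metadata GETs carry it in a header. A minimal sketch of how such requests are shaped, assuming the standard IMDSv2 headers (the requests are only built here, not sent, since IMDS is reachable only from inside an instance):

```python
import urllib.request

IMDS = "http://169.254.169.254"

def token_request(ttl=21600):
    """Build the IMDSv2 session-token request (the PUT seen in the log)."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )

def metadata_request(path, token):
    """Build a metadata GET carrying the session token, using the same
    versioned path prefix (2021-01-03) that coreos-metadata logs."""
    return urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )

req = token_request()
print(req.get_method(), req.full_url)  # PUT http://169.254.169.254/latest/api/token
```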
Aug 12 23:41:07.051762 containerd[2013]: time="2025-08-12T23:41:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Aug 12 23:41:07.060240 containerd[2013]: time="2025-08-12T23:41:07.058729137Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Aug 12 23:41:07.113823 containerd[2013]: time="2025-08-12T23:41:07.113735097Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.352µs"
Aug 12 23:41:07.113823 containerd[2013]: time="2025-08-12T23:41:07.113813865Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Aug 12 23:41:07.114015 containerd[2013]: time="2025-08-12T23:41:07.113854965Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Aug 12 23:41:07.114201 containerd[2013]: time="2025-08-12T23:41:07.114149709Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 12 23:41:07.118389 containerd[2013]: time="2025-08-12T23:41:07.118307565Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Aug 12 23:41:07.118506 containerd[2013]: time="2025-08-12T23:41:07.118419813Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 12 23:41:07.118646 containerd[2013]: time="2025-08-12T23:41:07.118594317Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 12 23:41:07.118734 containerd[2013]: time="2025-08-12T23:41:07.118637889Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 12 23:41:07.119137 containerd[2013]: time="2025-08-12T23:41:07.119063133Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 12 23:41:07.119137 containerd[2013]: time="2025-08-12T23:41:07.119119197Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 12 23:41:07.119510 containerd[2013]: time="2025-08-12T23:41:07.119151897Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 12 23:41:07.119510 containerd[2013]: time="2025-08-12T23:41:07.119198229Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Aug 12 23:41:07.119510 containerd[2013]: time="2025-08-12T23:41:07.119390721Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Aug 12 23:41:07.120072 containerd[2013]: time="2025-08-12T23:41:07.119845845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 12 23:41:07.120072 containerd[2013]: time="2025-08-12T23:41:07.119917869Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 12 23:41:07.120072 containerd[2013]: time="2025-08-12T23:41:07.119944041Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 12 23:41:07.120072 containerd[2013]: time="2025-08-12T23:41:07.120020037Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 12 23:41:07.129200 containerd[2013]: time="2025-08-12T23:41:07.128575197Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 12 23:41:07.129200 containerd[2013]: time="2025-08-12T23:41:07.128770713Z" level=info msg="metadata content store policy set" policy=shared
Aug 12 23:41:07.140130 containerd[2013]: time="2025-08-12T23:41:07.139835733Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 12 23:41:07.140710 containerd[2013]: time="2025-08-12T23:41:07.140314929Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 12 23:41:07.140710 containerd[2013]: time="2025-08-12T23:41:07.140463177Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 12 23:41:07.140710 containerd[2013]: time="2025-08-12T23:41:07.140495877Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 12 23:41:07.140710 containerd[2013]: time="2025-08-12T23:41:07.140663073Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 12 23:41:07.142284 containerd[2013]: time="2025-08-12T23:41:07.141078237Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 12 23:41:07.142284 containerd[2013]: time="2025-08-12T23:41:07.141118485Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 12 23:41:07.142763 containerd[2013]: time="2025-08-12T23:41:07.142439493Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 12 23:41:07.142763 containerd[2013]: time="2025-08-12T23:41:07.142526949Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 12 23:41:07.142763 containerd[2013]: time="2025-08-12T23:41:07.142561065Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 12 23:41:07.142763 containerd[2013]: time="2025-08-12T23:41:07.142612617Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 12 23:41:07.142763 containerd[2013]: time="2025-08-12T23:41:07.142646145Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 12 23:41:07.143470 containerd[2013]: time="2025-08-12T23:41:07.143402757Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 12 23:41:07.143636 containerd[2013]: time="2025-08-12T23:41:07.143608545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 12 23:41:07.145494 containerd[2013]: time="2025-08-12T23:41:07.145368633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 12 23:41:07.145494 containerd[2013]: time="2025-08-12T23:41:07.145440837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 12 23:41:07.146258 containerd[2013]: time="2025-08-12T23:41:07.145470297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 12 23:41:07.146258 containerd[2013]: time="2025-08-12T23:41:07.145697841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 12 23:41:07.146258 containerd[2013]: time="2025-08-12T23:41:07.145759917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 12 23:41:07.146258 containerd[2013]: time="2025-08-12T23:41:07.145805325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 12 23:41:07.146258 containerd[2013]: time="2025-08-12T23:41:07.145878669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 12 23:41:07.146258 containerd[2013]: time="2025-08-12T23:41:07.145946949Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 12 23:41:07.146258 containerd[2013]: time="2025-08-12T23:41:07.145987953Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 12 23:41:07.148240 containerd[2013]: time="2025-08-12T23:41:07.147699705Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 12 23:41:07.149382 containerd[2013]: time="2025-08-12T23:41:07.148512933Z" level=info msg="Start snapshots syncer"
Aug 12 23:41:07.149654 containerd[2013]: time="2025-08-12T23:41:07.149574717Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 12 23:41:07.150837 containerd[2013]: time="2025-08-12T23:41:07.150562533Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 12 23:41:07.150837 containerd[2013]: time="2025-08-12T23:41:07.150684981Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 12 23:41:07.154227 containerd[2013]: time="2025-08-12T23:41:07.154142505Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 12 23:41:07.154661 containerd[2013]: time="2025-08-12T23:41:07.154458345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 12 23:41:07.154661 containerd[2013]: time="2025-08-12T23:41:07.154605609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 12 23:41:07.154818 containerd[2013]: time="2025-08-12T23:41:07.154647105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 12 23:41:07.154818 containerd[2013]: time="2025-08-12T23:41:07.154688037Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 12 23:41:07.154818 containerd[2013]: time="2025-08-12T23:41:07.154730277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 12 23:41:07.154818 containerd[2013]: time="2025-08-12T23:41:07.154769817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 12 23:41:07.154818 containerd[2013]: time="2025-08-12T23:41:07.154802121Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 12 23:41:07.157219 containerd[2013]: time="2025-08-12T23:41:07.154891905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 12 23:41:07.157219 containerd[2013]: time="2025-08-12T23:41:07.154925265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 12 23:41:07.157219 containerd[2013]: time="2025-08-12T23:41:07.154969917Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Aug 12 23:41:07.157219 containerd[2013]: time="2025-08-12T23:41:07.155063301Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 12 23:41:07.157219 containerd[2013]: time="2025-08-12T23:41:07.155108385Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 12 23:41:07.157486 containerd[2013]: time="2025-08-12T23:41:07.157442637Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 12 23:41:07.157544 containerd[2013]: time="2025-08-12T23:41:07.157502229Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 12 23:41:07.157591 containerd[2013]: time="2025-08-12T23:41:07.157536225Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Aug 12 23:41:07.157591 containerd[2013]: time="2025-08-12T23:41:07.157574949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Aug 12 23:41:07.157670 containerd[2013]: time="2025-08-12T23:41:07.157609545Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Aug 12 23:41:07.162937 containerd[2013]: time="2025-08-12T23:41:07.162733329Z" level=info msg="runtime interface created"
Aug 12 23:41:07.162937 containerd[2013]: time="2025-08-12T23:41:07.162919353Z" level=info msg="created NRI interface"
Aug 12 23:41:07.163270 containerd[2013]: time="2025-08-12T23:41:07.163065933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Aug 12 23:41:07.163270 containerd[2013]: time="2025-08-12T23:41:07.163220289Z" level=info msg="Connect containerd service"
Aug 12 23:41:07.163613 containerd[2013]: time="2025-08-12T23:41:07.163442733Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 12 23:41:07.167220
containerd[2013]: time="2025-08-12T23:41:07.165566925Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:41:07.202031 polkitd[2151]: Started polkitd version 126 Aug 12 23:41:07.231232 polkitd[2151]: Loading rules from directory /etc/polkit-1/rules.d Aug 12 23:41:07.235284 polkitd[2151]: Loading rules from directory /run/polkit-1/rules.d Aug 12 23:41:07.235414 polkitd[2151]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 12 23:41:07.236099 polkitd[2151]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 12 23:41:07.236171 polkitd[2151]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 12 23:41:07.239718 polkitd[2151]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 12 23:41:07.244272 polkitd[2151]: Finished loading, compiling and executing 2 rules Aug 12 23:41:07.245492 systemd[1]: Started polkit.service - Authorization Manager. Aug 12 23:41:07.248653 dbus-daemon[1976]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 12 23:41:07.252678 polkitd[2151]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 12 23:41:07.273466 systemd-networkd[1854]: eth0: Gained IPv6LL Aug 12 23:41:07.297392 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:41:07.306911 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:41:07.317609 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 12 23:41:07.332736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 12 23:41:07.345765 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:41:07.376405 systemd-hostnamed[2040]: Hostname set to (transient) Aug 12 23:41:07.378757 systemd-resolved[1855]: System hostname changed to 'ip-172-31-28-88'. Aug 12 23:41:07.501038 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 12 23:41:07.575001 containerd[2013]: time="2025-08-12T23:41:07.574440443Z" level=info msg="Start subscribing containerd event" Aug 12 23:41:07.575001 containerd[2013]: time="2025-08-12T23:41:07.574543727Z" level=info msg="Start recovering state" Aug 12 23:41:07.575001 containerd[2013]: time="2025-08-12T23:41:07.574679939Z" level=info msg="Start event monitor" Aug 12 23:41:07.575001 containerd[2013]: time="2025-08-12T23:41:07.574718423Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:41:07.575001 containerd[2013]: time="2025-08-12T23:41:07.574736675Z" level=info msg="Start streaming server" Aug 12 23:41:07.575001 containerd[2013]: time="2025-08-12T23:41:07.574757087Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 12 23:41:07.575001 containerd[2013]: time="2025-08-12T23:41:07.574774775Z" level=info msg="runtime interface starting up..." Aug 12 23:41:07.575001 containerd[2013]: time="2025-08-12T23:41:07.574789379Z" level=info msg="starting plugins..." Aug 12 23:41:07.575001 containerd[2013]: time="2025-08-12T23:41:07.574819859Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 12 23:41:07.577454 containerd[2013]: time="2025-08-12T23:41:07.576922751Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:41:07.577454 containerd[2013]: time="2025-08-12T23:41:07.577031435Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 12 23:41:07.577809 containerd[2013]: time="2025-08-12T23:41:07.577738595Z" level=info msg="containerd successfully booted in 0.533786s" Aug 12 23:41:07.577879 systemd[1]: Started containerd.service - containerd container runtime. Aug 12 23:41:07.587490 amazon-ssm-agent[2194]: Initializing new seelog logger Aug 12 23:41:07.593104 amazon-ssm-agent[2194]: New Seelog Logger Creation Complete Aug 12 23:41:07.593104 amazon-ssm-agent[2194]: 2025/08/12 23:41:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 12 23:41:07.593104 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 12 23:41:07.593104 amazon-ssm-agent[2194]: 2025/08/12 23:41:07 processing appconfig overrides Aug 12 23:41:07.599354 amazon-ssm-agent[2194]: 2025/08/12 23:41:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 12 23:41:07.599354 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 12 23:41:07.599354 amazon-ssm-agent[2194]: 2025/08/12 23:41:07 processing appconfig overrides Aug 12 23:41:07.599354 amazon-ssm-agent[2194]: 2025/08/12 23:41:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 12 23:41:07.599354 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 12 23:41:07.599354 amazon-ssm-agent[2194]: 2025/08/12 23:41:07 processing appconfig overrides Aug 12 23:41:07.603636 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.5977 INFO Proxy environment variables: Aug 12 23:41:07.611957 amazon-ssm-agent[2194]: 2025/08/12 23:41:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 12 23:41:07.611957 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Aug 12 23:41:07.611957 amazon-ssm-agent[2194]: 2025/08/12 23:41:07 processing appconfig overrides Aug 12 23:41:07.704115 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.5978 INFO https_proxy: Aug 12 23:41:07.803744 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.5978 INFO http_proxy: Aug 12 23:41:07.902082 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.5978 INFO no_proxy: Aug 12 23:41:08.000487 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.5980 INFO Checking if agent identity type OnPrem can be assumed Aug 12 23:41:08.065406 sshd_keygen[2025]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 12 23:41:08.099885 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.5981 INFO Checking if agent identity type EC2 can be assumed Aug 12 23:41:08.118094 tar[2004]: linux-arm64/README.md Aug 12 23:41:08.156282 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 12 23:41:08.163620 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 12 23:41:08.177668 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 12 23:41:08.187857 systemd[1]: Started sshd@0-172.31.28.88:22-139.178.68.195:46938.service - OpenSSH per-connection server daemon (139.178.68.195:46938). Aug 12 23:41:08.198834 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.7193 INFO Agent will take identity from EC2 Aug 12 23:41:08.238161 systemd[1]: issuegen.service: Deactivated successfully. Aug 12 23:41:08.239814 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 12 23:41:08.253824 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 12 23:41:08.299927 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.7231 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Aug 12 23:41:08.306850 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 12 23:41:08.319719 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 12 23:41:08.327170 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Aug 12 23:41:08.332083 systemd[1]: Reached target getty.target - Login Prompts. Aug 12 23:41:08.397695 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.7231 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 12 23:41:08.477305 sshd[2232]: Accepted publickey for core from 139.178.68.195 port 46938 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:41:08.482302 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:41:08.496858 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.7231 INFO [amazon-ssm-agent] Starting Core Agent Aug 12 23:41:08.501320 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:41:08.508363 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 12 23:41:08.538542 systemd-logind[1986]: New session 1 of user core. Aug 12 23:41:08.565747 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 12 23:41:08.579531 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 12 23:41:08.600703 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.7231 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Aug 12 23:41:08.606935 (systemd)[2244]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:41:08.612669 systemd-logind[1986]: New session c1 of user core. Aug 12 23:41:08.701233 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.7231 INFO [Registrar] Starting registrar module Aug 12 23:41:08.756228 amazon-ssm-agent[2194]: 2025/08/12 23:41:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 12 23:41:08.756228 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Aug 12 23:41:08.756228 amazon-ssm-agent[2194]: 2025/08/12 23:41:08 processing appconfig overrides Aug 12 23:41:08.786617 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.7303 INFO [EC2Identity] Checking disk for registration info Aug 12 23:41:08.786728 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.7304 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Aug 12 23:41:08.786728 amazon-ssm-agent[2194]: 2025-08-12 23:41:07.7304 INFO [EC2Identity] Generating registration keypair Aug 12 23:41:08.786728 amazon-ssm-agent[2194]: 2025-08-12 23:41:08.6978 INFO [EC2Identity] Checking write access before registering Aug 12 23:41:08.786728 amazon-ssm-agent[2194]: 2025-08-12 23:41:08.6997 INFO [EC2Identity] Registering EC2 instance with Systems Manager Aug 12 23:41:08.786728 amazon-ssm-agent[2194]: 2025-08-12 23:41:08.7531 INFO [EC2Identity] EC2 registration was successful. Aug 12 23:41:08.786943 amazon-ssm-agent[2194]: 2025-08-12 23:41:08.7532 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Aug 12 23:41:08.786943 amazon-ssm-agent[2194]: 2025-08-12 23:41:08.7533 INFO [CredentialRefresher] credentialRefresher has started Aug 12 23:41:08.786943 amazon-ssm-agent[2194]: 2025-08-12 23:41:08.7533 INFO [CredentialRefresher] Starting credentials refresher loop Aug 12 23:41:08.786943 amazon-ssm-agent[2194]: 2025-08-12 23:41:08.7862 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 12 23:41:08.786943 amazon-ssm-agent[2194]: 2025-08-12 23:41:08.7865 INFO [CredentialRefresher] Credentials ready Aug 12 23:41:08.801109 amazon-ssm-agent[2194]: 2025-08-12 23:41:08.7869 INFO [CredentialRefresher] Next credential rotation will be in 29.9999884226 minutes Aug 12 23:41:08.928788 systemd[2244]: Queued start job for default target default.target. Aug 12 23:41:08.937645 systemd[2244]: Created slice app.slice - User Application Slice. Aug 12 23:41:08.937913 systemd[2244]: Reached target paths.target - Paths. 
Aug 12 23:41:08.938234 systemd[2244]: Reached target timers.target - Timers. Aug 12 23:41:08.940974 systemd[2244]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:41:08.985701 systemd[2244]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:41:08.985974 systemd[2244]: Reached target sockets.target - Sockets. Aug 12 23:41:08.986203 systemd[2244]: Reached target basic.target - Basic System. Aug 12 23:41:08.986339 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:41:08.994357 systemd[2244]: Reached target default.target - Main User Target. Aug 12 23:41:08.994441 systemd[2244]: Startup finished in 367ms. Aug 12 23:41:09.002492 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 12 23:41:09.166623 systemd[1]: Started sshd@1-172.31.28.88:22-139.178.68.195:46940.service - OpenSSH per-connection server daemon (139.178.68.195:46940). Aug 12 23:41:09.365095 sshd[2255]: Accepted publickey for core from 139.178.68.195 port 46940 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:41:09.368238 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:41:09.377279 systemd-logind[1986]: New session 2 of user core. Aug 12 23:41:09.396520 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 12 23:41:09.532234 sshd[2257]: Connection closed by 139.178.68.195 port 46940 Aug 12 23:41:09.532320 sshd-session[2255]: pam_unix(sshd:session): session closed for user core Aug 12 23:41:09.541506 systemd[1]: sshd@1-172.31.28.88:22-139.178.68.195:46940.service: Deactivated successfully. Aug 12 23:41:09.544457 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:41:09.546784 systemd-logind[1986]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:41:09.551449 systemd-logind[1986]: Removed session 2. 
Aug 12 23:41:09.567624 systemd[1]: Started sshd@2-172.31.28.88:22-139.178.68.195:46948.service - OpenSSH per-connection server daemon (139.178.68.195:46948). Aug 12 23:41:09.769516 sshd[2263]: Accepted publickey for core from 139.178.68.195 port 46948 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:41:09.772674 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:41:09.782342 systemd-logind[1986]: New session 3 of user core. Aug 12 23:41:09.789546 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 12 23:41:09.824511 ntpd[1981]: Listen normally on 7 eth0 [fe80::436:b7ff:febc:44cf%2]:123 Aug 12 23:41:09.825232 ntpd[1981]: 12 Aug 23:41:09 ntpd[1981]: Listen normally on 7 eth0 [fe80::436:b7ff:febc:44cf%2]:123 Aug 12 23:41:09.829737 amazon-ssm-agent[2194]: 2025-08-12 23:41:09.8293 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 12 23:41:09.869533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:41:09.873470 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:41:09.878313 systemd[1]: Startup finished in 3.774s (kernel) + 8.502s (initrd) + 9.751s (userspace) = 22.028s. Aug 12 23:41:09.911785 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:41:09.934359 sshd[2267]: Connection closed by 139.178.68.195 port 46948 Aug 12 23:41:09.934851 amazon-ssm-agent[2194]: 2025-08-12 23:41:09.8352 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2270) started Aug 12 23:41:09.926533 sshd-session[2263]: pam_unix(sshd:session): session closed for user core Aug 12 23:41:09.945754 systemd[1]: sshd@2-172.31.28.88:22-139.178.68.195:46948.service: Deactivated successfully. 
Aug 12 23:41:09.951709 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:41:09.954682 systemd-logind[1986]: Session 3 logged out. Waiting for processes to exit. Aug 12 23:41:09.958843 systemd-logind[1986]: Removed session 3. Aug 12 23:41:10.034717 amazon-ssm-agent[2194]: 2025-08-12 23:41:09.8353 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 12 23:41:11.153574 kubelet[2274]: E0812 23:41:11.153492 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:41:11.158238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:41:11.158558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:41:11.159199 systemd[1]: kubelet.service: Consumed 1.448s CPU time, 255.3M memory peak. Aug 12 23:41:12.980115 systemd-resolved[1855]: Clock change detected. Flushing caches. Aug 12 23:41:20.124804 systemd[1]: Started sshd@3-172.31.28.88:22-139.178.68.195:37850.service - OpenSSH per-connection server daemon (139.178.68.195:37850). Aug 12 23:41:20.329008 sshd[2300]: Accepted publickey for core from 139.178.68.195 port 37850 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:41:20.331540 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:41:20.339483 systemd-logind[1986]: New session 4 of user core. Aug 12 23:41:20.351530 systemd[1]: Started session-4.scope - Session 4 of User core. 
Aug 12 23:41:20.476330 sshd[2302]: Connection closed by 139.178.68.195 port 37850 Aug 12 23:41:20.477471 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Aug 12 23:41:20.484571 systemd[1]: sshd@3-172.31.28.88:22-139.178.68.195:37850.service: Deactivated successfully. Aug 12 23:41:20.488033 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:41:20.490185 systemd-logind[1986]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:41:20.493577 systemd-logind[1986]: Removed session 4. Aug 12 23:41:20.511776 systemd[1]: Started sshd@4-172.31.28.88:22-139.178.68.195:37862.service - OpenSSH per-connection server daemon (139.178.68.195:37862). Aug 12 23:41:20.710590 sshd[2308]: Accepted publickey for core from 139.178.68.195 port 37862 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:41:20.713037 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:41:20.723026 systemd-logind[1986]: New session 5 of user core. Aug 12 23:41:20.732545 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 12 23:41:20.852470 sshd[2310]: Connection closed by 139.178.68.195 port 37862 Aug 12 23:41:20.853565 sshd-session[2308]: pam_unix(sshd:session): session closed for user core Aug 12 23:41:20.859730 systemd-logind[1986]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:41:20.859913 systemd[1]: sshd@4-172.31.28.88:22-139.178.68.195:37862.service: Deactivated successfully. Aug 12 23:41:20.862952 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:41:20.869032 systemd-logind[1986]: Removed session 5. Aug 12 23:41:20.893760 systemd[1]: Started sshd@5-172.31.28.88:22-139.178.68.195:37868.service - OpenSSH per-connection server daemon (139.178.68.195:37868). 
Aug 12 23:41:21.087472 sshd[2316]: Accepted publickey for core from 139.178.68.195 port 37868 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:41:21.090087 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:41:21.098475 systemd-logind[1986]: New session 6 of user core. Aug 12 23:41:21.112553 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 12 23:41:21.238278 sshd[2318]: Connection closed by 139.178.68.195 port 37868 Aug 12 23:41:21.239096 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Aug 12 23:41:21.245190 systemd-logind[1986]: Session 6 logged out. Waiting for processes to exit. Aug 12 23:41:21.246397 systemd[1]: sshd@5-172.31.28.88:22-139.178.68.195:37868.service: Deactivated successfully. Aug 12 23:41:21.249163 systemd[1]: session-6.scope: Deactivated successfully. Aug 12 23:41:21.253919 systemd-logind[1986]: Removed session 6. Aug 12 23:41:21.278022 systemd[1]: Started sshd@6-172.31.28.88:22-139.178.68.195:37882.service - OpenSSH per-connection server daemon (139.178.68.195:37882). Aug 12 23:41:21.447669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 12 23:41:21.451834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:41:21.482380 sshd[2324]: Accepted publickey for core from 139.178.68.195 port 37882 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:41:21.483960 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:41:21.496331 systemd-logind[1986]: New session 7 of user core. Aug 12 23:41:21.506553 systemd[1]: Started session-7.scope - Session 7 of User core. 
Aug 12 23:41:21.629349 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 12 23:41:21.630565 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:41:21.648830 sudo[2330]: pam_unix(sudo:session): session closed for user root Aug 12 23:41:21.671906 sshd[2329]: Connection closed by 139.178.68.195 port 37882 Aug 12 23:41:21.673633 sshd-session[2324]: pam_unix(sshd:session): session closed for user core Aug 12 23:41:21.681005 systemd-logind[1986]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:41:21.681659 systemd[1]: sshd@6-172.31.28.88:22-139.178.68.195:37882.service: Deactivated successfully. Aug 12 23:41:21.685763 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:41:21.713424 systemd-logind[1986]: Removed session 7. Aug 12 23:41:21.715704 systemd[1]: Started sshd@7-172.31.28.88:22-139.178.68.195:37884.service - OpenSSH per-connection server daemon (139.178.68.195:37884). Aug 12 23:41:21.854137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:41:21.870218 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:41:21.921288 sshd[2336]: Accepted publickey for core from 139.178.68.195 port 37884 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:41:21.923647 sshd-session[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:41:21.933687 systemd-logind[1986]: New session 8 of user core. Aug 12 23:41:21.940023 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 12 23:41:21.961552 kubelet[2343]: E0812 23:41:21.961445 2343 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:41:21.969444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:41:21.969856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:41:21.970772 systemd[1]: kubelet.service: Consumed 326ms CPU time, 106.2M memory peak. Aug 12 23:41:22.049011 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 12 23:41:22.049714 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:41:22.059369 sudo[2352]: pam_unix(sudo:session): session closed for user root Aug 12 23:41:22.069426 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 12 23:41:22.070076 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:41:22.090846 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:41:22.152654 augenrules[2374]: No rules Aug 12 23:41:22.155649 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:41:22.156161 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 12 23:41:22.158594 sudo[2351]: pam_unix(sudo:session): session closed for user root Aug 12 23:41:22.182916 sshd[2349]: Connection closed by 139.178.68.195 port 37884 Aug 12 23:41:22.182038 sshd-session[2336]: pam_unix(sshd:session): session closed for user core Aug 12 23:41:22.187852 systemd[1]: sshd@7-172.31.28.88:22-139.178.68.195:37884.service: Deactivated successfully. 
Aug 12 23:41:22.191969 systemd[1]: session-8.scope: Deactivated successfully. Aug 12 23:41:22.196478 systemd-logind[1986]: Session 8 logged out. Waiting for processes to exit. Aug 12 23:41:22.198753 systemd-logind[1986]: Removed session 8. Aug 12 23:41:22.217750 systemd[1]: Started sshd@8-172.31.28.88:22-139.178.68.195:37890.service - OpenSSH per-connection server daemon (139.178.68.195:37890). Aug 12 23:41:22.419454 sshd[2383]: Accepted publickey for core from 139.178.68.195 port 37890 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:41:22.422607 sshd-session[2383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:41:22.430711 systemd-logind[1986]: New session 9 of user core. Aug 12 23:41:22.442521 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 12 23:41:22.543985 sudo[2386]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:41:22.545119 sudo[2386]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:41:23.140316 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 12 23:41:23.157762 (dockerd)[2403]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 12 23:41:23.572362 dockerd[2403]: time="2025-08-12T23:41:23.571526096Z" level=info msg="Starting up" Aug 12 23:41:23.575624 dockerd[2403]: time="2025-08-12T23:41:23.575202848Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 12 23:41:23.638652 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3374372724-merged.mount: Deactivated successfully. Aug 12 23:41:23.684582 dockerd[2403]: time="2025-08-12T23:41:23.684374217Z" level=info msg="Loading containers: start." 
Aug 12 23:41:23.701275 kernel: Initializing XFRM netlink socket Aug 12 23:41:24.035172 (udev-worker)[2424]: Network interface NamePolicy= disabled on kernel command line. Aug 12 23:41:24.112753 systemd-networkd[1854]: docker0: Link UP Aug 12 23:41:24.123030 dockerd[2403]: time="2025-08-12T23:41:24.122940871Z" level=info msg="Loading containers: done." Aug 12 23:41:24.156441 dockerd[2403]: time="2025-08-12T23:41:24.156217231Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:41:24.156938 dockerd[2403]: time="2025-08-12T23:41:24.156769147Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 12 23:41:24.157251 dockerd[2403]: time="2025-08-12T23:41:24.157177195Z" level=info msg="Initializing buildkit" Aug 12 23:41:24.210550 dockerd[2403]: time="2025-08-12T23:41:24.210164695Z" level=info msg="Completed buildkit initialization" Aug 12 23:41:24.226491 dockerd[2403]: time="2025-08-12T23:41:24.226415839Z" level=info msg="Daemon has completed initialization" Aug 12 23:41:24.226637 dockerd[2403]: time="2025-08-12T23:41:24.226540855Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:41:24.229382 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 12 23:41:24.627982 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1532154631-merged.mount: Deactivated successfully. Aug 12 23:41:25.582033 containerd[2013]: time="2025-08-12T23:41:25.581908714Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 12 23:41:26.196259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149886807.mount: Deactivated successfully. 
Aug 12 23:41:27.521708 containerd[2013]: time="2025-08-12T23:41:27.521645772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:27.524105 containerd[2013]: time="2025-08-12T23:41:27.524052636Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327781"
Aug 12 23:41:27.526021 containerd[2013]: time="2025-08-12T23:41:27.525972264Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:27.531417 containerd[2013]: time="2025-08-12T23:41:27.531363756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:27.533492 containerd[2013]: time="2025-08-12T23:41:27.533430696Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 1.951450366s"
Aug 12 23:41:27.533715 containerd[2013]: time="2025-08-12T23:41:27.533682180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\""
Aug 12 23:41:27.535168 containerd[2013]: time="2025-08-12T23:41:27.535088448Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 12 23:41:28.903275 containerd[2013]: time="2025-08-12T23:41:28.902583399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:28.904965 containerd[2013]: time="2025-08-12T23:41:28.904903623Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529696"
Aug 12 23:41:28.906279 containerd[2013]: time="2025-08-12T23:41:28.906175827Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:28.911959 containerd[2013]: time="2025-08-12T23:41:28.911146491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:28.913063 containerd[2013]: time="2025-08-12T23:41:28.913002159Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.377844399s"
Aug 12 23:41:28.913173 containerd[2013]: time="2025-08-12T23:41:28.913060071Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\""
Aug 12 23:41:28.913675 containerd[2013]: time="2025-08-12T23:41:28.913638051Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 12 23:41:30.074728 containerd[2013]: time="2025-08-12T23:41:30.074646732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:30.076399 containerd[2013]: time="2025-08-12T23:41:30.076306800Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484138"
Aug 12 23:41:30.077850 containerd[2013]: time="2025-08-12T23:41:30.077776788Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:30.083853 containerd[2013]: time="2025-08-12T23:41:30.083765712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:30.084975 containerd[2013]: time="2025-08-12T23:41:30.084876072Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.170534557s"
Aug 12 23:41:30.084975 containerd[2013]: time="2025-08-12T23:41:30.084934752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\""
Aug 12 23:41:30.086488 containerd[2013]: time="2025-08-12T23:41:30.086446428Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 12 23:41:31.346762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554062692.mount: Deactivated successfully.
Aug 12 23:41:31.958168 containerd[2013]: time="2025-08-12T23:41:31.958108842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:31.968006 containerd[2013]: time="2025-08-12T23:41:31.967932666Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378405"
Aug 12 23:41:31.975519 containerd[2013]: time="2025-08-12T23:41:31.975424938Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:31.980689 containerd[2013]: time="2025-08-12T23:41:31.980296926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:31.980689 containerd[2013]: time="2025-08-12T23:41:31.980524110Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.893866026s"
Aug 12 23:41:31.980689 containerd[2013]: time="2025-08-12T23:41:31.980562738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\""
Aug 12 23:41:31.981495 containerd[2013]: time="2025-08-12T23:41:31.981429198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 12 23:41:32.060571 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 12 23:41:32.064021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:41:32.460342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:41:32.475792 (kubelet)[2686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:41:32.561764 kubelet[2686]: E0812 23:41:32.561665 2686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:41:32.567775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:41:32.568095 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:41:32.569148 systemd[1]: kubelet.service: Consumed 303ms CPU time, 105.5M memory peak.
Aug 12 23:41:32.614317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088867742.mount: Deactivated successfully.
Aug 12 23:41:33.943449 containerd[2013]: time="2025-08-12T23:41:33.942548528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:33.987355 containerd[2013]: time="2025-08-12T23:41:33.987278048Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Aug 12 23:41:34.000682 containerd[2013]: time="2025-08-12T23:41:33.999913244Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:34.005396 containerd[2013]: time="2025-08-12T23:41:34.005307232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:34.007701 containerd[2013]: time="2025-08-12T23:41:34.007481452Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.02598905s"
Aug 12 23:41:34.007701 containerd[2013]: time="2025-08-12T23:41:34.007545592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Aug 12 23:41:34.008283 containerd[2013]: time="2025-08-12T23:41:34.008206180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 12 23:41:34.460558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974066476.mount: Deactivated successfully.
Aug 12 23:41:34.467582 containerd[2013]: time="2025-08-12T23:41:34.467500854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:41:34.470027 containerd[2013]: time="2025-08-12T23:41:34.469952142Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Aug 12 23:41:34.471097 containerd[2013]: time="2025-08-12T23:41:34.471031386Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:41:34.475751 containerd[2013]: time="2025-08-12T23:41:34.475672290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:41:34.478537 containerd[2013]: time="2025-08-12T23:41:34.478463382Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 470.169074ms"
Aug 12 23:41:34.478537 containerd[2013]: time="2025-08-12T23:41:34.478525806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Aug 12 23:41:34.479168 containerd[2013]: time="2025-08-12T23:41:34.479056878Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 12 23:41:35.013837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781386490.mount: Deactivated successfully.
Aug 12 23:41:36.939461 containerd[2013]: time="2025-08-12T23:41:36.939371111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:36.941425 containerd[2013]: time="2025-08-12T23:41:36.941347727Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469"
Aug 12 23:41:36.942790 containerd[2013]: time="2025-08-12T23:41:36.942693623Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:36.949319 containerd[2013]: time="2025-08-12T23:41:36.949198355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:41:36.955544 containerd[2013]: time="2025-08-12T23:41:36.955265027Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.476093933s"
Aug 12 23:41:36.955544 containerd[2013]: time="2025-08-12T23:41:36.955339715Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Aug 12 23:41:37.549681 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 12 23:41:42.811389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 12 23:41:42.817554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:41:43.157514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:41:43.175345 (kubelet)[2834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:41:43.260932 kubelet[2834]: E0812 23:41:43.260873 2834 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:41:43.266133 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:41:43.266695 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:41:43.267836 systemd[1]: kubelet.service: Consumed 302ms CPU time, 105.3M memory peak.
Aug 12 23:41:43.405651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:41:43.406257 systemd[1]: kubelet.service: Consumed 302ms CPU time, 105.3M memory peak.
Aug 12 23:41:43.410457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:41:43.467512 systemd[1]: Reload requested from client PID 2848 ('systemctl') (unit session-9.scope)...
Aug 12 23:41:43.467552 systemd[1]: Reloading...
Aug 12 23:41:43.702305 zram_generator::config[2895]: No configuration found.
Aug 12 23:41:43.924964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:41:44.191064 systemd[1]: Reloading finished in 722 ms.
Aug 12 23:41:44.284407 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 12 23:41:44.284797 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 12 23:41:44.286338 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:41:44.286586 systemd[1]: kubelet.service: Consumed 226ms CPU time, 95M memory peak.
Aug 12 23:41:44.290853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:41:44.613194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:41:44.629035 (kubelet)[2956]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 12 23:41:44.702963 kubelet[2956]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:41:44.702963 kubelet[2956]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 12 23:41:44.704254 kubelet[2956]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:41:44.704254 kubelet[2956]: I0812 23:41:44.703707 2956 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 12 23:41:45.543187 kubelet[2956]: I0812 23:41:45.543092 2956 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 12 23:41:45.543187 kubelet[2956]: I0812 23:41:45.543177 2956 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 12 23:41:45.544212 kubelet[2956]: I0812 23:41:45.544159 2956 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 12 23:41:45.603818 kubelet[2956]: E0812 23:41:45.603769 2956 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:41:45.605523 kubelet[2956]: I0812 23:41:45.605484 2956 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 12 23:41:45.618215 kubelet[2956]: I0812 23:41:45.618182 2956 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 12 23:41:45.624122 kubelet[2956]: I0812 23:41:45.624085 2956 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 12 23:41:45.625875 kubelet[2956]: I0812 23:41:45.625812 2956 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 12 23:41:45.626418 kubelet[2956]: I0812 23:41:45.626049 2956 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 12 23:41:45.626838 kubelet[2956]: I0812 23:41:45.626814 2956 topology_manager.go:138] "Creating topology manager with none policy"
Aug 12 23:41:45.626942 kubelet[2956]: I0812 23:41:45.626925 2956 container_manager_linux.go:304] "Creating device plugin manager"
Aug 12 23:41:45.627388 kubelet[2956]: I0812 23:41:45.627368 2956 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:41:45.633514 kubelet[2956]: I0812 23:41:45.633476 2956 kubelet.go:446] "Attempting to sync node with API server"
Aug 12 23:41:45.634219 kubelet[2956]: I0812 23:41:45.633704 2956 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:41:45.634219 kubelet[2956]: I0812 23:41:45.633754 2956 kubelet.go:352] "Adding apiserver pod source"
Aug 12 23:41:45.634219 kubelet[2956]: I0812 23:41:45.633775 2956 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:41:45.646189 kubelet[2956]: W0812 23:41:45.646094 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-88&limit=500&resourceVersion=0": dial tcp 172.31.28.88:6443: connect: connection refused
Aug 12 23:41:45.646361 kubelet[2956]: E0812 23:41:45.646205 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-88&limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:41:45.649414 kubelet[2956]: W0812 23:41:45.649323 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.88:6443: connect: connection refused
Aug 12 23:41:45.650252 kubelet[2956]: E0812 23:41:45.649431 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:41:45.650252 kubelet[2956]: I0812 23:41:45.649567 2956 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 12 23:41:45.650732 kubelet[2956]: I0812 23:41:45.650687 2956 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 12 23:41:45.651947 kubelet[2956]: W0812 23:41:45.651897 2956 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 12 23:41:45.654978 kubelet[2956]: I0812 23:41:45.654932 2956 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 12 23:41:45.655187 kubelet[2956]: I0812 23:41:45.655168 2956 server.go:1287] "Started kubelet"
Aug 12 23:41:45.662962 kubelet[2956]: I0812 23:41:45.662872 2956 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:41:45.664926 kubelet[2956]: I0812 23:41:45.664864 2956 server.go:479] "Adding debug handlers to kubelet server"
Aug 12 23:41:45.669259 kubelet[2956]: I0812 23:41:45.667923 2956 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:41:45.669259 kubelet[2956]: I0812 23:41:45.668436 2956 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:41:45.672186 kubelet[2956]: I0812 23:41:45.672124 2956 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:41:45.675606 kubelet[2956]: E0812 23:41:45.674542 2956 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.88:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.88:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-88.185b297f196d8916 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-88,UID:ip-172-31-28-88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-88,},FirstTimestamp:2025-08-12 23:41:45.655134486 +0000 UTC m=+1.019949858,LastTimestamp:2025-08-12 23:41:45.655134486 +0000 UTC m=+1.019949858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-88,}"
Aug 12 23:41:45.676043 kubelet[2956]: I0812 23:41:45.676016 2956 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:41:45.680143 kubelet[2956]: E0812 23:41:45.680092 2956 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-88\" not found"
Aug 12 23:41:45.681317 kubelet[2956]: I0812 23:41:45.681285 2956 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 12 23:41:45.681809 kubelet[2956]: I0812 23:41:45.681780 2956 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 12 23:41:45.683088 kubelet[2956]: I0812 23:41:45.683055 2956 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:41:45.684443 kubelet[2956]: W0812 23:41:45.684374 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.88:6443: connect: connection refused
Aug 12 23:41:45.684670 kubelet[2956]: E0812 23:41:45.684638 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:41:45.685080 kubelet[2956]: I0812 23:41:45.685046 2956 factory.go:221] Registration of the systemd container factory successfully
Aug 12 23:41:45.685426 kubelet[2956]: I0812 23:41:45.685388 2956 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:41:45.686344 kubelet[2956]: E0812 23:41:45.686306 2956 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:41:45.688210 kubelet[2956]: E0812 23:41:45.688126 2956 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-88?timeout=10s\": dial tcp 172.31.28.88:6443: connect: connection refused" interval="200ms"
Aug 12 23:41:45.688433 kubelet[2956]: I0812 23:41:45.688393 2956 factory.go:221] Registration of the containerd container factory successfully
Aug 12 23:41:45.720360 kubelet[2956]: I0812 23:41:45.720310 2956 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 12 23:41:45.720875 kubelet[2956]: I0812 23:41:45.720368 2956 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 12 23:41:45.720875 kubelet[2956]: I0812 23:41:45.720399 2956 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:41:45.722845 kubelet[2956]: I0812 23:41:45.722760 2956 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 12 23:41:45.725154 kubelet[2956]: I0812 23:41:45.725091 2956 policy_none.go:49] "None policy: Start"
Aug 12 23:41:45.725154 kubelet[2956]: I0812 23:41:45.725130 2956 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 12 23:41:45.725154 kubelet[2956]: I0812 23:41:45.725155 2956 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:41:45.726213 kubelet[2956]: I0812 23:41:45.724981 2956 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:41:45.726213 kubelet[2956]: I0812 23:41:45.725629 2956 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 12 23:41:45.726213 kubelet[2956]: I0812 23:41:45.725664 2956 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 12 23:41:45.726213 kubelet[2956]: I0812 23:41:45.725681 2956 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 12 23:41:45.726213 kubelet[2956]: E0812 23:41:45.725745 2956 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 12 23:41:45.734531 kubelet[2956]: W0812 23:41:45.734430 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.88:6443: connect: connection refused
Aug 12 23:41:45.734655 kubelet[2956]: E0812 23:41:45.734550 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:41:45.742498 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 12 23:41:45.766612 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 12 23:41:45.773630 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 12 23:41:45.781988 kubelet[2956]: E0812 23:41:45.781943 2956 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-88\" not found"
Aug 12 23:41:45.794157 kubelet[2956]: I0812 23:41:45.794038 2956 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 12 23:41:45.794639 kubelet[2956]: I0812 23:41:45.794614 2956 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 12 23:41:45.794779 kubelet[2956]: I0812 23:41:45.794731 2956 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 12 23:41:45.797278 kubelet[2956]: I0812 23:41:45.797099 2956 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 12 23:41:45.798766 kubelet[2956]: E0812 23:41:45.798579 2956 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 12 23:41:45.798766 kubelet[2956]: E0812 23:41:45.798646 2956 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-88\" not found"
Aug 12 23:41:45.850268 systemd[1]: Created slice kubepods-burstable-podd2cec5d9a90c6cb09f5662b33603866e.slice - libcontainer container kubepods-burstable-podd2cec5d9a90c6cb09f5662b33603866e.slice.
Aug 12 23:41:45.865931 kubelet[2956]: E0812 23:41:45.865874 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88"
Aug 12 23:41:45.869751 systemd[1]: Created slice kubepods-burstable-pod24137d7a5e21b008566ed1c8db3a972e.slice - libcontainer container kubepods-burstable-pod24137d7a5e21b008566ed1c8db3a972e.slice.
Aug 12 23:41:45.885262 kubelet[2956]: I0812 23:41:45.884663 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24137d7a5e21b008566ed1c8db3a972e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"24137d7a5e21b008566ed1c8db3a972e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:45.885262 kubelet[2956]: I0812 23:41:45.884736 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d208ab98c97d71c2fca6835c9a5b238a-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-88\" (UID: \"d208ab98c97d71c2fca6835c9a5b238a\") " pod="kube-system/kube-scheduler-ip-172-31-28-88"
Aug 12 23:41:45.885262 kubelet[2956]: I0812 23:41:45.884780 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2cec5d9a90c6cb09f5662b33603866e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"d2cec5d9a90c6cb09f5662b33603866e\") " pod="kube-system/kube-apiserver-ip-172-31-28-88"
Aug 12 23:41:45.885262 kubelet[2956]: I0812 23:41:45.884817 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24137d7a5e21b008566ed1c8db3a972e-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"24137d7a5e21b008566ed1c8db3a972e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:45.885262 kubelet[2956]: I0812 23:41:45.884854 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24137d7a5e21b008566ed1c8db3a972e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"24137d7a5e21b008566ed1c8db3a972e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:45.886349 kubelet[2956]: I0812 23:41:45.884889 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24137d7a5e21b008566ed1c8db3a972e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"24137d7a5e21b008566ed1c8db3a972e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:45.886349 kubelet[2956]: I0812 23:41:45.884923 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2cec5d9a90c6cb09f5662b33603866e-ca-certs\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"d2cec5d9a90c6cb09f5662b33603866e\") " pod="kube-system/kube-apiserver-ip-172-31-28-88"
Aug 12 23:41:45.886349 kubelet[2956]: I0812 23:41:45.884956 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2cec5d9a90c6cb09f5662b33603866e-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"d2cec5d9a90c6cb09f5662b33603866e\") " pod="kube-system/kube-apiserver-ip-172-31-28-88"
Aug 12 23:41:45.886349 kubelet[2956]: I0812 23:41:45.885005 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/24137d7a5e21b008566ed1c8db3a972e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"24137d7a5e21b008566ed1c8db3a972e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:45.886349 kubelet[2956]: E0812 23:41:45.885057 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88"
Aug 12 23:41:45.889337 kubelet[2956]: E0812 23:41:45.889266 2956 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-88?timeout=10s\": dial tcp 172.31.28.88:6443: connect: connection refused" interval="400ms"
Aug 12 23:41:45.892214 systemd[1]: Created slice kubepods-burstable-podd208ab98c97d71c2fca6835c9a5b238a.slice - libcontainer container kubepods-burstable-podd208ab98c97d71c2fca6835c9a5b238a.slice.
Aug 12 23:41:45.898827 kubelet[2956]: E0812 23:41:45.898762 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88"
Aug 12 23:41:45.899089 kubelet[2956]: I0812 23:41:45.899045 2956 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88"
Aug 12 23:41:45.899988 kubelet[2956]: E0812 23:41:45.899917 2956 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.88:6443/api/v1/nodes\": dial tcp 172.31.28.88:6443: connect: connection refused" node="ip-172-31-28-88"
Aug 12 23:41:46.102769 kubelet[2956]: I0812 23:41:46.102705 2956 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88"
Aug 12 23:41:46.103309 kubelet[2956]: E0812 23:41:46.103241 2956 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.88:6443/api/v1/nodes\": dial tcp 172.31.28.88:6443: connect: connection refused" node="ip-172-31-28-88"
Aug 12 23:41:46.167893 containerd[2013]:
time="2025-08-12T23:41:46.167823112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-88,Uid:d2cec5d9a90c6cb09f5662b33603866e,Namespace:kube-system,Attempt:0,}" Aug 12 23:41:46.194279 containerd[2013]: time="2025-08-12T23:41:46.189947536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-88,Uid:24137d7a5e21b008566ed1c8db3a972e,Namespace:kube-system,Attempt:0,}" Aug 12 23:41:46.201794 containerd[2013]: time="2025-08-12T23:41:46.201743021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-88,Uid:d208ab98c97d71c2fca6835c9a5b238a,Namespace:kube-system,Attempt:0,}" Aug 12 23:41:46.205520 containerd[2013]: time="2025-08-12T23:41:46.205438277Z" level=info msg="connecting to shim 1b9ac83f17df9bfc1c1c6e5dfd27bde9e92af5d411187a7849ec88cd59787475" address="unix:///run/containerd/s/9c0579547b9680fea58a823586a17294eab946b59100aeba8b204b3a3f6d95f5" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:41:46.289611 systemd[1]: Started cri-containerd-1b9ac83f17df9bfc1c1c6e5dfd27bde9e92af5d411187a7849ec88cd59787475.scope - libcontainer container 1b9ac83f17df9bfc1c1c6e5dfd27bde9e92af5d411187a7849ec88cd59787475. 
Aug 12 23:41:46.298702 containerd[2013]: time="2025-08-12T23:41:46.298568261Z" level=info msg="connecting to shim dfa327baadda3764f237498798e196769889d732b46564b50924e98fe2033fe0" address="unix:///run/containerd/s/2b78382a9ca48531db966f8fa212d5ccdd2f8840046f68dc84878ad93e16ef36" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:41:46.298825 kubelet[2956]: E0812 23:41:46.298621 2956 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-88?timeout=10s\": dial tcp 172.31.28.88:6443: connect: connection refused" interval="800ms" Aug 12 23:41:46.301254 containerd[2013]: time="2025-08-12T23:41:46.300624545Z" level=info msg="connecting to shim fafc9c8446e34a4a09dfe2f98c5047cfb1b8a098cfa780911638ae4f3da658c3" address="unix:///run/containerd/s/4a10b7a2d55711425141f3c1f3cb083e387dc8b51bbbbb259c47eda2645aa850" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:41:46.374577 systemd[1]: Started cri-containerd-fafc9c8446e34a4a09dfe2f98c5047cfb1b8a098cfa780911638ae4f3da658c3.scope - libcontainer container fafc9c8446e34a4a09dfe2f98c5047cfb1b8a098cfa780911638ae4f3da658c3. Aug 12 23:41:46.397159 systemd[1]: Started cri-containerd-dfa327baadda3764f237498798e196769889d732b46564b50924e98fe2033fe0.scope - libcontainer container dfa327baadda3764f237498798e196769889d732b46564b50924e98fe2033fe0. 
Aug 12 23:41:46.434703 containerd[2013]: time="2025-08-12T23:41:46.434628150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-88,Uid:d2cec5d9a90c6cb09f5662b33603866e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b9ac83f17df9bfc1c1c6e5dfd27bde9e92af5d411187a7849ec88cd59787475\"" Aug 12 23:41:46.445798 containerd[2013]: time="2025-08-12T23:41:46.445546734Z" level=info msg="CreateContainer within sandbox \"1b9ac83f17df9bfc1c1c6e5dfd27bde9e92af5d411187a7849ec88cd59787475\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:41:46.467664 containerd[2013]: time="2025-08-12T23:41:46.467586918Z" level=info msg="Container 28cd8c76845b4077c436967fe53038cd1245f3b50077d1c884905ffdb14edc18: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:41:46.489811 containerd[2013]: time="2025-08-12T23:41:46.489754458Z" level=info msg="CreateContainer within sandbox \"1b9ac83f17df9bfc1c1c6e5dfd27bde9e92af5d411187a7849ec88cd59787475\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"28cd8c76845b4077c436967fe53038cd1245f3b50077d1c884905ffdb14edc18\"" Aug 12 23:41:46.491507 containerd[2013]: time="2025-08-12T23:41:46.490891170Z" level=info msg="StartContainer for \"28cd8c76845b4077c436967fe53038cd1245f3b50077d1c884905ffdb14edc18\"" Aug 12 23:41:46.493843 containerd[2013]: time="2025-08-12T23:41:46.493773846Z" level=info msg="connecting to shim 28cd8c76845b4077c436967fe53038cd1245f3b50077d1c884905ffdb14edc18" address="unix:///run/containerd/s/9c0579547b9680fea58a823586a17294eab946b59100aeba8b204b3a3f6d95f5" protocol=ttrpc version=3 Aug 12 23:41:46.510538 kubelet[2956]: I0812 23:41:46.510487 2956 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88" Aug 12 23:41:46.512212 kubelet[2956]: E0812 23:41:46.512164 2956 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.88:6443/api/v1/nodes\": dial tcp 172.31.28.88:6443: 
connect: connection refused" node="ip-172-31-28-88" Aug 12 23:41:46.513328 containerd[2013]: time="2025-08-12T23:41:46.513036090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-88,Uid:d208ab98c97d71c2fca6835c9a5b238a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fafc9c8446e34a4a09dfe2f98c5047cfb1b8a098cfa780911638ae4f3da658c3\"" Aug 12 23:41:46.521660 containerd[2013]: time="2025-08-12T23:41:46.521577414Z" level=info msg="CreateContainer within sandbox \"fafc9c8446e34a4a09dfe2f98c5047cfb1b8a098cfa780911638ae4f3da658c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:41:46.552602 containerd[2013]: time="2025-08-12T23:41:46.551604942Z" level=info msg="Container e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:41:46.561553 systemd[1]: Started cri-containerd-28cd8c76845b4077c436967fe53038cd1245f3b50077d1c884905ffdb14edc18.scope - libcontainer container 28cd8c76845b4077c436967fe53038cd1245f3b50077d1c884905ffdb14edc18. 
Aug 12 23:41:46.572124 containerd[2013]: time="2025-08-12T23:41:46.572067714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-88,Uid:24137d7a5e21b008566ed1c8db3a972e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfa327baadda3764f237498798e196769889d732b46564b50924e98fe2033fe0\"" Aug 12 23:41:46.576556 containerd[2013]: time="2025-08-12T23:41:46.576502086Z" level=info msg="CreateContainer within sandbox \"dfa327baadda3764f237498798e196769889d732b46564b50924e98fe2033fe0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 23:41:46.583528 containerd[2013]: time="2025-08-12T23:41:46.583275702Z" level=info msg="CreateContainer within sandbox \"fafc9c8446e34a4a09dfe2f98c5047cfb1b8a098cfa780911638ae4f3da658c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b\"" Aug 12 23:41:46.590676 containerd[2013]: time="2025-08-12T23:41:46.590588874Z" level=info msg="StartContainer for \"e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b\"" Aug 12 23:41:46.603506 containerd[2013]: time="2025-08-12T23:41:46.602671567Z" level=info msg="connecting to shim e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b" address="unix:///run/containerd/s/4a10b7a2d55711425141f3c1f3cb083e387dc8b51bbbbb259c47eda2645aa850" protocol=ttrpc version=3 Aug 12 23:41:46.605469 containerd[2013]: time="2025-08-12T23:41:46.605323963Z" level=info msg="Container 6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:41:46.633918 containerd[2013]: time="2025-08-12T23:41:46.632684203Z" level=info msg="CreateContainer within sandbox \"dfa327baadda3764f237498798e196769889d732b46564b50924e98fe2033fe0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f\"" Aug 12 23:41:46.636353 containerd[2013]: time="2025-08-12T23:41:46.636287887Z" level=info msg="StartContainer for \"6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f\"" Aug 12 23:41:46.645253 containerd[2013]: time="2025-08-12T23:41:46.644658187Z" level=info msg="connecting to shim 6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f" address="unix:///run/containerd/s/2b78382a9ca48531db966f8fa212d5ccdd2f8840046f68dc84878ad93e16ef36" protocol=ttrpc version=3 Aug 12 23:41:46.654748 systemd[1]: Started cri-containerd-e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b.scope - libcontainer container e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b. Aug 12 23:41:46.678023 kubelet[2956]: W0812 23:41:46.677935 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.88:6443: connect: connection refused Aug 12 23:41:46.678165 kubelet[2956]: E0812 23:41:46.678038 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:41:46.705945 containerd[2013]: time="2025-08-12T23:41:46.705147055Z" level=info msg="StartContainer for \"28cd8c76845b4077c436967fe53038cd1245f3b50077d1c884905ffdb14edc18\" returns successfully" Aug 12 23:41:46.706568 systemd[1]: Started cri-containerd-6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f.scope - libcontainer container 6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f. 
Aug 12 23:41:46.712632 kubelet[2956]: W0812 23:41:46.712035 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-88&limit=500&resourceVersion=0": dial tcp 172.31.28.88:6443: connect: connection refused Aug 12 23:41:46.712632 kubelet[2956]: E0812 23:41:46.712131 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-88&limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:41:46.773511 kubelet[2956]: E0812 23:41:46.773380 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Aug 12 23:41:46.881019 containerd[2013]: time="2025-08-12T23:41:46.879713024Z" level=info msg="StartContainer for \"6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f\" returns successfully" Aug 12 23:41:46.891137 containerd[2013]: time="2025-08-12T23:41:46.889577924Z" level=info msg="StartContainer for \"e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b\" returns successfully" Aug 12 23:41:47.316465 kubelet[2956]: I0812 23:41:47.316427 2956 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88" Aug 12 23:41:47.782380 kubelet[2956]: E0812 23:41:47.779791 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Aug 12 23:41:47.783988 kubelet[2956]: E0812 23:41:47.783936 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Aug 12 23:41:47.787940 
kubelet[2956]: E0812 23:41:47.786617 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Aug 12 23:41:48.786940 kubelet[2956]: E0812 23:41:48.786684 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Aug 12 23:41:48.788815 kubelet[2956]: E0812 23:41:48.788162 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Aug 12 23:41:49.791305 kubelet[2956]: E0812 23:41:49.789882 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Aug 12 23:41:49.792602 kubelet[2956]: E0812 23:41:49.792349 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Aug 12 23:41:51.012044 update_engine[1987]: I20250812 23:41:51.011272 1987 update_attempter.cc:509] Updating boot flags... 
Aug 12 23:41:51.037112 kubelet[2956]: E0812 23:41:51.037053 2956 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Aug 12 23:41:51.080806 kubelet[2956]: I0812 23:41:51.080425 2956 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-88" Aug 12 23:41:51.089460 kubelet[2956]: I0812 23:41:51.088408 2956 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-88" Aug 12 23:41:51.275346 kubelet[2956]: E0812 23:41:51.274466 2956 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-88" Aug 12 23:41:51.275346 kubelet[2956]: I0812 23:41:51.274543 2956 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-88" Aug 12 23:41:51.310921 kubelet[2956]: E0812 23:41:51.309957 2956 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-88" Aug 12 23:41:51.311732 kubelet[2956]: I0812 23:41:51.311171 2956 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-88" Aug 12 23:41:51.346260 kubelet[2956]: E0812 23:41:51.345579 2956 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-88" Aug 12 23:41:51.654191 kubelet[2956]: I0812 23:41:51.654081 2956 apiserver.go:52] "Watching apiserver" Aug 12 23:41:51.686168 kubelet[2956]: I0812 23:41:51.683778 2956 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" 
Aug 12 23:41:52.239016 kubelet[2956]: I0812 23:41:52.238917 2956 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-88" Aug 12 23:41:54.112059 systemd[1]: Reload requested from client PID 3411 ('systemctl') (unit session-9.scope)... Aug 12 23:41:54.112085 systemd[1]: Reloading... Aug 12 23:41:54.384270 zram_generator::config[3458]: No configuration found. Aug 12 23:41:54.603406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:41:54.915044 systemd[1]: Reloading finished in 802 ms. Aug 12 23:41:54.976166 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:41:54.993140 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:41:54.995342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:41:54.995449 systemd[1]: kubelet.service: Consumed 1.824s CPU time, 127.8M memory peak. Aug 12 23:41:55.001949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:41:55.401314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:41:55.422153 (kubelet)[3515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:41:55.522181 kubelet[3515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:41:55.522181 kubelet[3515]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Aug 12 23:41:55.522181 kubelet[3515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:41:55.522810 kubelet[3515]: I0812 23:41:55.522483 3515 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:41:55.537867 kubelet[3515]: I0812 23:41:55.537819 3515 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 12 23:41:55.538519 kubelet[3515]: I0812 23:41:55.538053 3515 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:41:55.538714 kubelet[3515]: I0812 23:41:55.538690 3515 server.go:954] "Client rotation is on, will bootstrap in background" Aug 12 23:41:55.546807 kubelet[3515]: I0812 23:41:55.546758 3515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 12 23:41:55.556644 kubelet[3515]: I0812 23:41:55.556597 3515 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:41:55.575528 kubelet[3515]: I0812 23:41:55.575486 3515 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 12 23:41:55.583353 kubelet[3515]: I0812 23:41:55.583271 3515 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:41:55.585373 kubelet[3515]: I0812 23:41:55.583942 3515 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:41:55.585373 kubelet[3515]: I0812 23:41:55.583996 3515 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:41:55.585373 kubelet[3515]: I0812 23:41:55.584363 3515 topology_manager.go:138] "Creating topology manager with none 
policy" Aug 12 23:41:55.585373 kubelet[3515]: I0812 23:41:55.584385 3515 container_manager_linux.go:304] "Creating device plugin manager" Aug 12 23:41:55.585681 kubelet[3515]: I0812 23:41:55.584473 3515 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:41:55.585681 kubelet[3515]: I0812 23:41:55.584711 3515 kubelet.go:446] "Attempting to sync node with API server" Aug 12 23:41:55.585681 kubelet[3515]: I0812 23:41:55.584736 3515 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:41:55.585681 kubelet[3515]: I0812 23:41:55.584778 3515 kubelet.go:352] "Adding apiserver pod source" Aug 12 23:41:55.585681 kubelet[3515]: I0812 23:41:55.584800 3515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:41:55.591253 kubelet[3515]: I0812 23:41:55.589973 3515 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 12 23:41:55.595026 kubelet[3515]: I0812 23:41:55.594955 3515 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:41:55.597670 kubelet[3515]: I0812 23:41:55.597617 3515 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 12 23:41:55.597793 kubelet[3515]: I0812 23:41:55.597684 3515 server.go:1287] "Started kubelet" Aug 12 23:41:55.616276 kubelet[3515]: I0812 23:41:55.616136 3515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:41:55.628268 kubelet[3515]: I0812 23:41:55.626970 3515 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:41:55.640248 kubelet[3515]: I0812 23:41:55.638467 3515 server.go:479] "Adding debug handlers to kubelet server" Aug 12 23:41:55.646002 kubelet[3515]: I0812 23:41:55.645912 3515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:41:55.656250 kubelet[3515]: I0812 23:41:55.654309 3515 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:41:55.656250 kubelet[3515]: I0812 23:41:55.655598 3515 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 12 23:41:55.656250 kubelet[3515]: E0812 23:41:55.655946 3515 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-88\" not found" Aug 12 23:41:55.666843 kubelet[3515]: I0812 23:41:55.666778 3515 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:41:55.671280 kubelet[3515]: I0812 23:41:55.670823 3515 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 12 23:41:55.671280 kubelet[3515]: I0812 23:41:55.671089 3515 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:41:55.687185 kubelet[3515]: I0812 23:41:55.684789 3515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:41:55.694394 kubelet[3515]: I0812 23:41:55.694338 3515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 12 23:41:55.694611 kubelet[3515]: I0812 23:41:55.694588 3515 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 12 23:41:55.694734 kubelet[3515]: I0812 23:41:55.694714 3515 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 12 23:41:55.694824 kubelet[3515]: I0812 23:41:55.694806 3515 kubelet.go:2382] "Starting kubelet main sync loop" Aug 12 23:41:55.695018 kubelet[3515]: E0812 23:41:55.694985 3515 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:41:55.699216 kubelet[3515]: I0812 23:41:55.699137 3515 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:41:55.699216 kubelet[3515]: I0812 23:41:55.699399 3515 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:41:55.707444 kubelet[3515]: E0812 23:41:55.707268 3515 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:41:55.708366 kubelet[3515]: I0812 23:41:55.707989 3515 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:41:55.795615 kubelet[3515]: E0812 23:41:55.795577 3515 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 12 23:41:55.829494 kubelet[3515]: I0812 23:41:55.829456 3515 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 12 23:41:55.829862 kubelet[3515]: I0812 23:41:55.829834 3515 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 12 23:41:55.830013 kubelet[3515]: I0812 23:41:55.829991 3515 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:41:55.832376 kubelet[3515]: I0812 23:41:55.832303 3515 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 12 23:41:55.832602 kubelet[3515]: I0812 23:41:55.832559 3515 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 12 23:41:55.833646 kubelet[3515]: I0812 23:41:55.833314 3515 policy_none.go:49] "None policy: Start" Aug 12 
23:41:55.833646 kubelet[3515]: I0812 23:41:55.833344 3515 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 12 23:41:55.833646 kubelet[3515]: I0812 23:41:55.833370 3515 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:41:55.833646 kubelet[3515]: I0812 23:41:55.833591 3515 state_mem.go:75] "Updated machine memory state"
Aug 12 23:41:55.847743 kubelet[3515]: I0812 23:41:55.847629 3515 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 12 23:41:55.848258 kubelet[3515]: I0812 23:41:55.848184 3515 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 12 23:41:55.848417 kubelet[3515]: I0812 23:41:55.848211 3515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 12 23:41:55.850197 kubelet[3515]: I0812 23:41:55.850108 3515 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 12 23:41:55.856674 kubelet[3515]: E0812 23:41:55.856599 3515 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 12 23:41:55.976356 kubelet[3515]: I0812 23:41:55.975596 3515 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88"
Aug 12 23:41:55.995875 kubelet[3515]: I0812 23:41:55.995592 3515 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-88"
Aug 12 23:41:55.995875 kubelet[3515]: I0812 23:41:55.995770 3515 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-88"
Aug 12 23:41:55.999635 kubelet[3515]: I0812 23:41:55.996829 3515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-88"
Aug 12 23:41:55.999635 kubelet[3515]: I0812 23:41:55.997187 3515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-88"
Aug 12 23:41:55.999635 kubelet[3515]: I0812 23:41:55.999156 3515 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:56.021149 kubelet[3515]: E0812 23:41:56.018713 3515 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-88\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-88"
Aug 12 23:41:56.073563 kubelet[3515]: I0812 23:41:56.073284 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d208ab98c97d71c2fca6835c9a5b238a-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-88\" (UID: \"d208ab98c97d71c2fca6835c9a5b238a\") " pod="kube-system/kube-scheduler-ip-172-31-28-88"
Aug 12 23:41:56.073792 kubelet[3515]: I0812 23:41:56.073572 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24137d7a5e21b008566ed1c8db3a972e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"24137d7a5e21b008566ed1c8db3a972e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:56.073792 kubelet[3515]: I0812 23:41:56.073631 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24137d7a5e21b008566ed1c8db3a972e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"24137d7a5e21b008566ed1c8db3a972e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:56.073792 kubelet[3515]: I0812 23:41:56.073670 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24137d7a5e21b008566ed1c8db3a972e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"24137d7a5e21b008566ed1c8db3a972e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:56.073792 kubelet[3515]: I0812 23:41:56.073716 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2cec5d9a90c6cb09f5662b33603866e-ca-certs\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"d2cec5d9a90c6cb09f5662b33603866e\") " pod="kube-system/kube-apiserver-ip-172-31-28-88"
Aug 12 23:41:56.073792 kubelet[3515]: I0812 23:41:56.073754 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2cec5d9a90c6cb09f5662b33603866e-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"d2cec5d9a90c6cb09f5662b33603866e\") " pod="kube-system/kube-apiserver-ip-172-31-28-88"
Aug 12 23:41:56.074061 kubelet[3515]: I0812 23:41:56.073801 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2cec5d9a90c6cb09f5662b33603866e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"d2cec5d9a90c6cb09f5662b33603866e\") " pod="kube-system/kube-apiserver-ip-172-31-28-88"
Aug 12 23:41:56.074061 kubelet[3515]: I0812 23:41:56.073893 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24137d7a5e21b008566ed1c8db3a972e-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"24137d7a5e21b008566ed1c8db3a972e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:56.074061 kubelet[3515]: I0812 23:41:56.073960 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/24137d7a5e21b008566ed1c8db3a972e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"24137d7a5e21b008566ed1c8db3a972e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Aug 12 23:41:56.603315 kubelet[3515]: I0812 23:41:56.602963 3515 apiserver.go:52] "Watching apiserver"
Aug 12 23:41:56.671059 kubelet[3515]: I0812 23:41:56.670990 3515 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 12 23:41:56.780904 kubelet[3515]: I0812 23:41:56.780782 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-88" podStartSLOduration=4.780758153 podStartE2EDuration="4.780758153s" podCreationTimestamp="2025-08-12 23:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:41:56.750964361 +0000 UTC m=+1.319143028" watchObservedRunningTime="2025-08-12 23:41:56.780758153 +0000 UTC m=+1.348936832"
Aug 12 23:41:56.781359 kubelet[3515]: I0812 23:41:56.780968 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-88" podStartSLOduration=0.780957833 podStartE2EDuration="780.957833ms" podCreationTimestamp="2025-08-12 23:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:41:56.779129057 +0000 UTC m=+1.347307736" watchObservedRunningTime="2025-08-12 23:41:56.780957833 +0000 UTC m=+1.349136500"
Aug 12 23:41:56.804691 kubelet[3515]: I0812 23:41:56.804474 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-88" podStartSLOduration=0.804451841 podStartE2EDuration="804.451841ms" podCreationTimestamp="2025-08-12 23:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:41:56.804365201 +0000 UTC m=+1.372543880" watchObservedRunningTime="2025-08-12 23:41:56.804451841 +0000 UTC m=+1.372630520"
Aug 12 23:42:01.386663 kubelet[3515]: I0812 23:42:01.386597 3515 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 12 23:42:01.388267 containerd[2013]: time="2025-08-12T23:42:01.387731012Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 12 23:42:01.389395 kubelet[3515]: I0812 23:42:01.388048 3515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 12 23:42:02.276321 systemd[1]: Created slice kubepods-besteffort-pode91aabeb_e56f_437e_b449_a670b74b69df.slice - libcontainer container kubepods-besteffort-pode91aabeb_e56f_437e_b449_a670b74b69df.slice.
Aug 12 23:42:02.313034 kubelet[3515]: I0812 23:42:02.312908 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e91aabeb-e56f-437e-b449-a670b74b69df-lib-modules\") pod \"kube-proxy-8bfp5\" (UID: \"e91aabeb-e56f-437e-b449-a670b74b69df\") " pod="kube-system/kube-proxy-8bfp5"
Aug 12 23:42:02.313213 kubelet[3515]: I0812 23:42:02.313058 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e91aabeb-e56f-437e-b449-a670b74b69df-kube-proxy\") pod \"kube-proxy-8bfp5\" (UID: \"e91aabeb-e56f-437e-b449-a670b74b69df\") " pod="kube-system/kube-proxy-8bfp5"
Aug 12 23:42:02.313213 kubelet[3515]: I0812 23:42:02.313154 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e91aabeb-e56f-437e-b449-a670b74b69df-xtables-lock\") pod \"kube-proxy-8bfp5\" (UID: \"e91aabeb-e56f-437e-b449-a670b74b69df\") " pod="kube-system/kube-proxy-8bfp5"
Aug 12 23:42:02.313353 kubelet[3515]: I0812 23:42:02.313297 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbssd\" (UniqueName: \"kubernetes.io/projected/e91aabeb-e56f-437e-b449-a670b74b69df-kube-api-access-rbssd\") pod \"kube-proxy-8bfp5\" (UID: \"e91aabeb-e56f-437e-b449-a670b74b69df\") " pod="kube-system/kube-proxy-8bfp5"
Aug 12 23:42:02.456623 systemd[1]: Created slice kubepods-besteffort-pod86f782df_8568_4bf4_b5c4_176e48a9418f.slice - libcontainer container kubepods-besteffort-pod86f782df_8568_4bf4_b5c4_176e48a9418f.slice.
Aug 12 23:42:02.514404 kubelet[3515]: I0812 23:42:02.514334 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjs95\" (UniqueName: \"kubernetes.io/projected/86f782df-8568-4bf4-b5c4-176e48a9418f-kube-api-access-vjs95\") pod \"tigera-operator-747864d56d-p2wmj\" (UID: \"86f782df-8568-4bf4-b5c4-176e48a9418f\") " pod="tigera-operator/tigera-operator-747864d56d-p2wmj"
Aug 12 23:42:02.515089 kubelet[3515]: I0812 23:42:02.515057 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86f782df-8568-4bf4-b5c4-176e48a9418f-var-lib-calico\") pod \"tigera-operator-747864d56d-p2wmj\" (UID: \"86f782df-8568-4bf4-b5c4-176e48a9418f\") " pod="tigera-operator/tigera-operator-747864d56d-p2wmj"
Aug 12 23:42:02.595367 containerd[2013]: time="2025-08-12T23:42:02.595301446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8bfp5,Uid:e91aabeb-e56f-437e-b449-a670b74b69df,Namespace:kube-system,Attempt:0,}"
Aug 12 23:42:02.655433 containerd[2013]: time="2025-08-12T23:42:02.655354246Z" level=info msg="connecting to shim 3f82c1e08aa053dc01960a575bb0f18669dce80eacec54d653604f337e5a1ba7" address="unix:///run/containerd/s/3d57d3b708fb2fa0cacc3734d29b19c55dd45dfd439142bc2789b6737286030a" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:42:02.718606 systemd[1]: Started cri-containerd-3f82c1e08aa053dc01960a575bb0f18669dce80eacec54d653604f337e5a1ba7.scope - libcontainer container 3f82c1e08aa053dc01960a575bb0f18669dce80eacec54d653604f337e5a1ba7.
Aug 12 23:42:02.764784 containerd[2013]: time="2025-08-12T23:42:02.764676599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8bfp5,Uid:e91aabeb-e56f-437e-b449-a670b74b69df,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f82c1e08aa053dc01960a575bb0f18669dce80eacec54d653604f337e5a1ba7\""
Aug 12 23:42:02.772670 containerd[2013]: time="2025-08-12T23:42:02.772547063Z" level=info msg="CreateContainer within sandbox \"3f82c1e08aa053dc01960a575bb0f18669dce80eacec54d653604f337e5a1ba7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 12 23:42:02.781615 containerd[2013]: time="2025-08-12T23:42:02.781502639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-p2wmj,Uid:86f782df-8568-4bf4-b5c4-176e48a9418f,Namespace:tigera-operator,Attempt:0,}"
Aug 12 23:42:02.814953 containerd[2013]: time="2025-08-12T23:42:02.814826951Z" level=info msg="Container e97b491d8ff373152d55440b30311f037c14b7514c5ea396bdede31629c35f5f: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:42:02.838274 containerd[2013]: time="2025-08-12T23:42:02.837972575Z" level=info msg="CreateContainer within sandbox \"3f82c1e08aa053dc01960a575bb0f18669dce80eacec54d653604f337e5a1ba7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e97b491d8ff373152d55440b30311f037c14b7514c5ea396bdede31629c35f5f\""
Aug 12 23:42:02.839802 containerd[2013]: time="2025-08-12T23:42:02.839734271Z" level=info msg="StartContainer for \"e97b491d8ff373152d55440b30311f037c14b7514c5ea396bdede31629c35f5f\""
Aug 12 23:42:02.842406 containerd[2013]: time="2025-08-12T23:42:02.842303519Z" level=info msg="connecting to shim f81764757b478bee46d9e01d1ce3522024cb76b1adf57f0f95820661c79715f5" address="unix:///run/containerd/s/be343ad32399e8abe0de469965fab5d8e5c55749d810910ebd43de00227faa2e" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:42:02.856273 containerd[2013]: time="2025-08-12T23:42:02.856007027Z" level=info msg="connecting to shim e97b491d8ff373152d55440b30311f037c14b7514c5ea396bdede31629c35f5f" address="unix:///run/containerd/s/3d57d3b708fb2fa0cacc3734d29b19c55dd45dfd439142bc2789b6737286030a" protocol=ttrpc version=3
Aug 12 23:42:02.901944 systemd[1]: Started cri-containerd-e97b491d8ff373152d55440b30311f037c14b7514c5ea396bdede31629c35f5f.scope - libcontainer container e97b491d8ff373152d55440b30311f037c14b7514c5ea396bdede31629c35f5f.
Aug 12 23:42:02.923860 systemd[1]: Started cri-containerd-f81764757b478bee46d9e01d1ce3522024cb76b1adf57f0f95820661c79715f5.scope - libcontainer container f81764757b478bee46d9e01d1ce3522024cb76b1adf57f0f95820661c79715f5.
Aug 12 23:42:03.029598 containerd[2013]: time="2025-08-12T23:42:03.029534996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-p2wmj,Uid:86f782df-8568-4bf4-b5c4-176e48a9418f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f81764757b478bee46d9e01d1ce3522024cb76b1adf57f0f95820661c79715f5\""
Aug 12 23:42:03.034539 containerd[2013]: time="2025-08-12T23:42:03.034463096Z" level=info msg="StartContainer for \"e97b491d8ff373152d55440b30311f037c14b7514c5ea396bdede31629c35f5f\" returns successfully"
Aug 12 23:42:03.041129 containerd[2013]: time="2025-08-12T23:42:03.041027504Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 12 23:42:03.467090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870030886.mount: Deactivated successfully.
Aug 12 23:42:03.813163 kubelet[3515]: I0812 23:42:03.812347 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8bfp5" podStartSLOduration=1.812321616 podStartE2EDuration="1.812321616s" podCreationTimestamp="2025-08-12 23:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:42:03.811839312 +0000 UTC m=+8.380017979" watchObservedRunningTime="2025-08-12 23:42:03.812321616 +0000 UTC m=+8.380500271"
Aug 12 23:42:04.404265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3080203507.mount: Deactivated successfully.
Aug 12 23:42:05.256068 containerd[2013]: time="2025-08-12T23:42:05.255781187Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:05.257335 containerd[2013]: time="2025-08-12T23:42:05.257242871Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Aug 12 23:42:05.258770 containerd[2013]: time="2025-08-12T23:42:05.258694415Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:05.264290 containerd[2013]: time="2025-08-12T23:42:05.263149763Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:05.264735 containerd[2013]: time="2025-08-12T23:42:05.264612155Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.223486827s"
Aug 12 23:42:05.264735 containerd[2013]: time="2025-08-12T23:42:05.264666935Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Aug 12 23:42:05.271547 containerd[2013]: time="2025-08-12T23:42:05.271112651Z" level=info msg="CreateContainer within sandbox \"f81764757b478bee46d9e01d1ce3522024cb76b1adf57f0f95820661c79715f5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 12 23:42:05.285649 containerd[2013]: time="2025-08-12T23:42:05.283141271Z" level=info msg="Container 9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:42:05.296355 containerd[2013]: time="2025-08-12T23:42:05.296287415Z" level=info msg="CreateContainer within sandbox \"f81764757b478bee46d9e01d1ce3522024cb76b1adf57f0f95820661c79715f5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337\""
Aug 12 23:42:05.299346 containerd[2013]: time="2025-08-12T23:42:05.299290811Z" level=info msg="StartContainer for \"9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337\""
Aug 12 23:42:05.301382 containerd[2013]: time="2025-08-12T23:42:05.301327331Z" level=info msg="connecting to shim 9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337" address="unix:///run/containerd/s/be343ad32399e8abe0de469965fab5d8e5c55749d810910ebd43de00227faa2e" protocol=ttrpc version=3
Aug 12 23:42:05.360521 systemd[1]: Started cri-containerd-9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337.scope - libcontainer container 9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337.
Aug 12 23:42:05.414185 containerd[2013]: time="2025-08-12T23:42:05.414086724Z" level=info msg="StartContainer for \"9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337\" returns successfully"
Aug 12 23:42:13.904292 sudo[2386]: pam_unix(sudo:session): session closed for user root
Aug 12 23:42:13.930250 sshd[2385]: Connection closed by 139.178.68.195 port 37890
Aug 12 23:42:13.931509 sshd-session[2383]: pam_unix(sshd:session): session closed for user core
Aug 12 23:42:13.940898 systemd[1]: sshd@8-172.31.28.88:22-139.178.68.195:37890.service: Deactivated successfully.
Aug 12 23:42:13.947832 systemd[1]: session-9.scope: Deactivated successfully.
Aug 12 23:42:13.948391 systemd[1]: session-9.scope: Consumed 10.121s CPU time, 232.5M memory peak.
Aug 12 23:42:13.956944 systemd-logind[1986]: Session 9 logged out. Waiting for processes to exit.
Aug 12 23:42:13.961319 systemd-logind[1986]: Removed session 9.
Aug 12 23:42:27.754955 kubelet[3515]: I0812 23:42:27.754843 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-p2wmj" podStartSLOduration=23.525512444 podStartE2EDuration="25.754821275s" podCreationTimestamp="2025-08-12 23:42:02 +0000 UTC" firstStartedPulling="2025-08-12 23:42:03.037988468 +0000 UTC m=+7.606167135" lastFinishedPulling="2025-08-12 23:42:05.267297299 +0000 UTC m=+9.835475966" observedRunningTime="2025-08-12 23:42:05.823904258 +0000 UTC m=+10.392082925" watchObservedRunningTime="2025-08-12 23:42:27.754821275 +0000 UTC m=+32.322999930"
Aug 12 23:42:27.774681 systemd[1]: Created slice kubepods-besteffort-podb5ffa2cd_8f7f_4ece_bd78_5fbe638ba3d5.slice - libcontainer container kubepods-besteffort-podb5ffa2cd_8f7f_4ece_bd78_5fbe638ba3d5.slice.
Aug 12 23:42:27.896895 kubelet[3515]: I0812 23:42:27.896808 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b5ffa2cd-8f7f-4ece-bd78-5fbe638ba3d5-typha-certs\") pod \"calico-typha-679f46b8b8-qtzdm\" (UID: \"b5ffa2cd-8f7f-4ece-bd78-5fbe638ba3d5\") " pod="calico-system/calico-typha-679f46b8b8-qtzdm"
Aug 12 23:42:27.896895 kubelet[3515]: I0812 23:42:27.896894 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8x2x\" (UniqueName: \"kubernetes.io/projected/b5ffa2cd-8f7f-4ece-bd78-5fbe638ba3d5-kube-api-access-n8x2x\") pod \"calico-typha-679f46b8b8-qtzdm\" (UID: \"b5ffa2cd-8f7f-4ece-bd78-5fbe638ba3d5\") " pod="calico-system/calico-typha-679f46b8b8-qtzdm"
Aug 12 23:42:27.897109 kubelet[3515]: I0812 23:42:27.896940 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5ffa2cd-8f7f-4ece-bd78-5fbe638ba3d5-tigera-ca-bundle\") pod \"calico-typha-679f46b8b8-qtzdm\" (UID: \"b5ffa2cd-8f7f-4ece-bd78-5fbe638ba3d5\") " pod="calico-system/calico-typha-679f46b8b8-qtzdm"
Aug 12 23:42:28.088904 containerd[2013]: time="2025-08-12T23:42:28.088686801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-679f46b8b8-qtzdm,Uid:b5ffa2cd-8f7f-4ece-bd78-5fbe638ba3d5,Namespace:calico-system,Attempt:0,}"
Aug 12 23:42:28.099031 kubelet[3515]: I0812 23:42:28.098984 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/049a95f5-f610-4c94-a822-c8556529a6e2-var-run-calico\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.103646 systemd[1]: Created slice kubepods-besteffort-pod049a95f5_f610_4c94_a822_c8556529a6e2.slice - libcontainer container kubepods-besteffort-pod049a95f5_f610_4c94_a822_c8556529a6e2.slice.
Aug 12 23:42:28.107578 kubelet[3515]: I0812 23:42:28.104933 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/049a95f5-f610-4c94-a822-c8556529a6e2-xtables-lock\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.107578 kubelet[3515]: I0812 23:42:28.106346 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/049a95f5-f610-4c94-a822-c8556529a6e2-lib-modules\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.107578 kubelet[3515]: I0812 23:42:28.106435 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/049a95f5-f610-4c94-a822-c8556529a6e2-tigera-ca-bundle\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.107578 kubelet[3515]: I0812 23:42:28.106484 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/049a95f5-f610-4c94-a822-c8556529a6e2-policysync\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.107578 kubelet[3515]: I0812 23:42:28.106558 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/049a95f5-f610-4c94-a822-c8556529a6e2-flexvol-driver-host\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.107893 kubelet[3515]: I0812 23:42:28.106770 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/049a95f5-f610-4c94-a822-c8556529a6e2-node-certs\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.107893 kubelet[3515]: I0812 23:42:28.107013 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/049a95f5-f610-4c94-a822-c8556529a6e2-cni-net-dir\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.107893 kubelet[3515]: I0812 23:42:28.107063 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/049a95f5-f610-4c94-a822-c8556529a6e2-cni-log-dir\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.110433 kubelet[3515]: I0812 23:42:28.107211 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/049a95f5-f610-4c94-a822-c8556529a6e2-cni-bin-dir\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.110433 kubelet[3515]: I0812 23:42:28.110062 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkv56\" (UniqueName: \"kubernetes.io/projected/049a95f5-f610-4c94-a822-c8556529a6e2-kube-api-access-nkv56\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.110433 kubelet[3515]: I0812 23:42:28.110160 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/049a95f5-f610-4c94-a822-c8556529a6e2-var-lib-calico\") pod \"calico-node-j5klq\" (UID: \"049a95f5-f610-4c94-a822-c8556529a6e2\") " pod="calico-system/calico-node-j5klq"
Aug 12 23:42:28.161616 containerd[2013]: time="2025-08-12T23:42:28.161536497Z" level=info msg="connecting to shim d7c3d268b2f128d362d150f8c88598c6795a6319db6cb293e643f93b3d810f44" address="unix:///run/containerd/s/8f153199477682e6c7bc2990a2e2e0d54152b9d3fd8d9b4df8d8a8a8d60c5d5e" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:42:28.214550 kubelet[3515]: E0812 23:42:28.214503 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.215259 kubelet[3515]: W0812 23:42:28.214726 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.215259 kubelet[3515]: E0812 23:42:28.214771 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.217163 kubelet[3515]: E0812 23:42:28.217017 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.217163 kubelet[3515]: W0812 23:42:28.217090 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.217163 kubelet[3515]: E0812 23:42:28.217124 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.221534 kubelet[3515]: E0812 23:42:28.221471 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.221534 kubelet[3515]: W0812 23:42:28.221521 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.221748 kubelet[3515]: E0812 23:42:28.221556 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.224840 kubelet[3515]: E0812 23:42:28.224781 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.224840 kubelet[3515]: W0812 23:42:28.224818 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.224840 kubelet[3515]: E0812 23:42:28.224865 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.228005 kubelet[3515]: E0812 23:42:28.227954 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.228933 kubelet[3515]: W0812 23:42:28.227994 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.228933 kubelet[3515]: E0812 23:42:28.228112 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.240503 kubelet[3515]: E0812 23:42:28.238243 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.240503 kubelet[3515]: W0812 23:42:28.240291 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.240503 kubelet[3515]: E0812 23:42:28.240342 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.276840 systemd[1]: Started cri-containerd-d7c3d268b2f128d362d150f8c88598c6795a6319db6cb293e643f93b3d810f44.scope - libcontainer container d7c3d268b2f128d362d150f8c88598c6795a6319db6cb293e643f93b3d810f44.
Aug 12 23:42:28.293489 kubelet[3515]: E0812 23:42:28.293438 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.293489 kubelet[3515]: W0812 23:42:28.293478 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.293766 kubelet[3515]: E0812 23:42:28.293538 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.418859 containerd[2013]: time="2025-08-12T23:42:28.418634218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j5klq,Uid:049a95f5-f610-4c94-a822-c8556529a6e2,Namespace:calico-system,Attempt:0,}"
Aug 12 23:42:28.469105 containerd[2013]: time="2025-08-12T23:42:28.469046530Z" level=info msg="connecting to shim 905523c23e5a5fb5f4e31b48aa24cf40d6fb8655e27a3c501eac62b4cf382e90" address="unix:///run/containerd/s/747e430f5d68dfd472e48d1c592cfc91005a82b957f70eba3c4b9cb2d80c457f" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:42:28.581993 systemd[1]: Started cri-containerd-905523c23e5a5fb5f4e31b48aa24cf40d6fb8655e27a3c501eac62b4cf382e90.scope - libcontainer container 905523c23e5a5fb5f4e31b48aa24cf40d6fb8655e27a3c501eac62b4cf382e90.
Aug 12 23:42:28.677863 kubelet[3515]: E0812 23:42:28.677337 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sh6l" podUID="f465d6b2-8aef-4866-ba3f-bfdd97688b16"
Aug 12 23:42:28.705976 kubelet[3515]: E0812 23:42:28.705449 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.705976 kubelet[3515]: W0812 23:42:28.705894 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.706509 kubelet[3515]: E0812 23:42:28.706341 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.706961 kubelet[3515]: E0812 23:42:28.706875 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.707351 kubelet[3515]: W0812 23:42:28.706903 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.707351 kubelet[3515]: E0812 23:42:28.707306 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.708730 kubelet[3515]: E0812 23:42:28.708626 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.709072 kubelet[3515]: W0812 23:42:28.708664 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.709072 kubelet[3515]: E0812 23:42:28.708940 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.710562 kubelet[3515]: E0812 23:42:28.709854 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.710562 kubelet[3515]: W0812 23:42:28.709883 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.710562 kubelet[3515]: E0812 23:42:28.709909 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.711343 kubelet[3515]: E0812 23:42:28.711312 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.711573 kubelet[3515]: W0812 23:42:28.711499 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.711573 kubelet[3515]: E0812 23:42:28.711537 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.712792 kubelet[3515]: E0812 23:42:28.712755 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.713204 kubelet[3515]: W0812 23:42:28.712964 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.713204 kubelet[3515]: E0812 23:42:28.713007 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.714163 kubelet[3515]: E0812 23:42:28.713938 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.714163 kubelet[3515]: W0812 23:42:28.713969 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.714825 kubelet[3515]: E0812 23:42:28.713996 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.717834 kubelet[3515]: E0812 23:42:28.717610 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.718196 kubelet[3515]: W0812 23:42:28.717665 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.718196 kubelet[3515]: E0812 23:42:28.718040 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.719651 kubelet[3515]: E0812 23:42:28.719259 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.719651 kubelet[3515]: W0812 23:42:28.719292 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.719651 kubelet[3515]: E0812 23:42:28.719326 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.720756 kubelet[3515]: E0812 23:42:28.720716 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.721517 kubelet[3515]: W0812 23:42:28.720976 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.721517 kubelet[3515]: E0812 23:42:28.721019 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.722259 kubelet[3515]: E0812 23:42:28.722013 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.722485 kubelet[3515]: W0812 23:42:28.722446 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.722952 kubelet[3515]: E0812 23:42:28.722581 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.723491 kubelet[3515]: E0812 23:42:28.723330 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.723669 kubelet[3515]: W0812 23:42:28.723618 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.723845 kubelet[3515]: E0812 23:42:28.723780 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.726169 kubelet[3515]: E0812 23:42:28.725615 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.726169 kubelet[3515]: W0812 23:42:28.725710 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.726169 kubelet[3515]: E0812 23:42:28.725822 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.727609 kubelet[3515]: E0812 23:42:28.726689 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.727609 kubelet[3515]: W0812 23:42:28.727015 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.727609 kubelet[3515]: E0812 23:42:28.727055 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.728175 kubelet[3515]: E0812 23:42:28.728100 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.728707 kubelet[3515]: W0812 23:42:28.728665 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.729138 kubelet[3515]: E0812 23:42:28.728854 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.729646 kubelet[3515]: E0812 23:42:28.729618 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.729898 kubelet[3515]: W0812 23:42:28.729833 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.730305 kubelet[3515]: E0812 23:42:28.730277 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.731665 kubelet[3515]: E0812 23:42:28.731438 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.731665 kubelet[3515]: W0812 23:42:28.731476 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.731665 kubelet[3515]: E0812 23:42:28.731508 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.733418 kubelet[3515]: E0812 23:42:28.733048 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.733418 kubelet[3515]: W0812 23:42:28.733082 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.733418 kubelet[3515]: E0812 23:42:28.733113 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.734427 kubelet[3515]: E0812 23:42:28.734350 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.734735 kubelet[3515]: W0812 23:42:28.734683 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.734931 kubelet[3515]: E0812 23:42:28.734858 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.735640 kubelet[3515]: E0812 23:42:28.735589 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.736497 kubelet[3515]: W0812 23:42:28.735824 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.736497 kubelet[3515]: E0812 23:42:28.735863 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.737414 kubelet[3515]: E0812 23:42:28.737298 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.737861 kubelet[3515]: W0812 23:42:28.737764 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.738095 kubelet[3515]: E0812 23:42:28.738030 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.738577 kubelet[3515]: I0812 23:42:28.738538 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f465d6b2-8aef-4866-ba3f-bfdd97688b16-socket-dir\") pod \"csi-node-driver-8sh6l\" (UID: \"f465d6b2-8aef-4866-ba3f-bfdd97688b16\") " pod="calico-system/csi-node-driver-8sh6l" Aug 12 23:42:28.739500 kubelet[3515]: E0812 23:42:28.739449 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.739500 kubelet[3515]: W0812 23:42:28.739485 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.739995 kubelet[3515]: E0812 23:42:28.739538 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.740520 kubelet[3515]: E0812 23:42:28.740476 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.740520 kubelet[3515]: W0812 23:42:28.740513 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.740882 kubelet[3515]: E0812 23:42:28.740555 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.744499 kubelet[3515]: E0812 23:42:28.744442 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.744499 kubelet[3515]: W0812 23:42:28.744483 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.744773 kubelet[3515]: E0812 23:42:28.744517 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.744773 kubelet[3515]: I0812 23:42:28.744574 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f465d6b2-8aef-4866-ba3f-bfdd97688b16-registration-dir\") pod \"csi-node-driver-8sh6l\" (UID: \"f465d6b2-8aef-4866-ba3f-bfdd97688b16\") " pod="calico-system/csi-node-driver-8sh6l" Aug 12 23:42:28.748555 kubelet[3515]: E0812 23:42:28.748456 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.748555 kubelet[3515]: W0812 23:42:28.748505 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.748555 kubelet[3515]: E0812 23:42:28.748555 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.748794 kubelet[3515]: I0812 23:42:28.748602 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797nj\" (UniqueName: \"kubernetes.io/projected/f465d6b2-8aef-4866-ba3f-bfdd97688b16-kube-api-access-797nj\") pod \"csi-node-driver-8sh6l\" (UID: \"f465d6b2-8aef-4866-ba3f-bfdd97688b16\") " pod="calico-system/csi-node-driver-8sh6l" Aug 12 23:42:28.749673 kubelet[3515]: E0812 23:42:28.749433 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.749673 kubelet[3515]: W0812 23:42:28.749476 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.749673 kubelet[3515]: E0812 23:42:28.749538 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.749673 kubelet[3515]: I0812 23:42:28.749601 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f465d6b2-8aef-4866-ba3f-bfdd97688b16-kubelet-dir\") pod \"csi-node-driver-8sh6l\" (UID: \"f465d6b2-8aef-4866-ba3f-bfdd97688b16\") " pod="calico-system/csi-node-driver-8sh6l" Aug 12 23:42:28.751154 kubelet[3515]: E0812 23:42:28.750621 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.751154 kubelet[3515]: W0812 23:42:28.750649 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.751870 kubelet[3515]: E0812 23:42:28.751718 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.752553 kubelet[3515]: E0812 23:42:28.752471 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.752553 kubelet[3515]: W0812 23:42:28.752515 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.753560 kubelet[3515]: E0812 23:42:28.753454 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.754061 kubelet[3515]: E0812 23:42:28.753607 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.754061 kubelet[3515]: W0812 23:42:28.753626 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.754061 kubelet[3515]: E0812 23:42:28.753693 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.757033 kubelet[3515]: I0812 23:42:28.756493 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f465d6b2-8aef-4866-ba3f-bfdd97688b16-varrun\") pod \"csi-node-driver-8sh6l\" (UID: \"f465d6b2-8aef-4866-ba3f-bfdd97688b16\") " pod="calico-system/csi-node-driver-8sh6l" Aug 12 23:42:28.757033 kubelet[3515]: E0812 23:42:28.756708 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.757033 kubelet[3515]: W0812 23:42:28.756730 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.759618 kubelet[3515]: E0812 23:42:28.758427 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.759618 kubelet[3515]: E0812 23:42:28.758879 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.759618 kubelet[3515]: W0812 23:42:28.758903 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.759618 kubelet[3515]: E0812 23:42:28.758928 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.759618 kubelet[3515]: E0812 23:42:28.759376 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.759618 kubelet[3515]: W0812 23:42:28.759397 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.759618 kubelet[3515]: E0812 23:42:28.759439 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.760409 kubelet[3515]: E0812 23:42:28.760244 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.760409 kubelet[3515]: W0812 23:42:28.760280 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.760409 kubelet[3515]: E0812 23:42:28.760310 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.762592 kubelet[3515]: E0812 23:42:28.762422 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.762592 kubelet[3515]: W0812 23:42:28.762463 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.762592 kubelet[3515]: E0812 23:42:28.762494 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.764953 kubelet[3515]: E0812 23:42:28.763466 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.764953 kubelet[3515]: W0812 23:42:28.763494 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.764953 kubelet[3515]: E0812 23:42:28.763523 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.810859 containerd[2013]: time="2025-08-12T23:42:28.810615552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j5klq,Uid:049a95f5-f610-4c94-a822-c8556529a6e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"905523c23e5a5fb5f4e31b48aa24cf40d6fb8655e27a3c501eac62b4cf382e90\"" Aug 12 23:42:28.817734 containerd[2013]: time="2025-08-12T23:42:28.817470864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 12 23:42:28.860369 kubelet[3515]: E0812 23:42:28.860306 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.860771 kubelet[3515]: W0812 23:42:28.860340 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.860771 kubelet[3515]: E0812 23:42:28.860574 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.862361 kubelet[3515]: E0812 23:42:28.862276 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.862361 kubelet[3515]: W0812 23:42:28.862321 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.862670 kubelet[3515]: E0812 23:42:28.862624 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.864045 kubelet[3515]: E0812 23:42:28.863978 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.864344 kubelet[3515]: W0812 23:42:28.864013 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.864344 kubelet[3515]: E0812 23:42:28.864286 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.865030 kubelet[3515]: E0812 23:42:28.864965 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.865030 kubelet[3515]: W0812 23:42:28.864995 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.866479 kubelet[3515]: E0812 23:42:28.866392 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.867296 kubelet[3515]: E0812 23:42:28.867206 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.867296 kubelet[3515]: W0812 23:42:28.867258 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.867695 kubelet[3515]: E0812 23:42:28.867600 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:42:28.868424 kubelet[3515]: E0812 23:42:28.868360 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.868424 kubelet[3515]: W0812 23:42:28.868390 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.868692 kubelet[3515]: E0812 23:42:28.868667 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:42:28.869649 kubelet[3515]: E0812 23:42:28.869594 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:42:28.869951 kubelet[3515]: W0812 23:42:28.869902 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:42:28.870328 kubelet[3515]: E0812 23:42:28.870301 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 12 23:42:28.871907 kubelet[3515]: E0812 23:42:28.871825 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.871907 kubelet[3515]: W0812 23:42:28.871860 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.872340 kubelet[3515]: E0812 23:42:28.872216 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.873279 kubelet[3515]: E0812 23:42:28.873159 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.873279 kubelet[3515]: W0812 23:42:28.873191 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.874437 kubelet[3515]: E0812 23:42:28.874372 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 12 23:42:28.875585 kubelet[3515]: E0812 23:42:28.875371 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.875585 kubelet[3515]: W0812 23:42:28.875531 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.876251 kubelet[3515]: E0812 23:42:28.876152 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.877970 kubelet[3515]: E0812 23:42:28.877813 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.878397 kubelet[3515]: W0812 23:42:28.878151 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.878812 kubelet[3515]: E0812 23:42:28.878743 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 12 23:42:28.880691 kubelet[3515]: E0812 23:42:28.880512 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.880691 kubelet[3515]: W0812 23:42:28.880546 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.881608 kubelet[3515]: E0812 23:42:28.881401 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.882062 kubelet[3515]: E0812 23:42:28.882017 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.882442 kubelet[3515]: W0812 23:42:28.882266 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.882442 kubelet[3515]: E0812 23:42:28.882362 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 12 23:42:28.883306 kubelet[3515]: E0812 23:42:28.883021 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.883306 kubelet[3515]: W0812 23:42:28.883154 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.883306 kubelet[3515]: E0812 23:42:28.883245 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.884270 kubelet[3515]: E0812 23:42:28.884145 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.884688 kubelet[3515]: W0812 23:42:28.884542 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.884688 kubelet[3515]: E0812 23:42:28.884631 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 12 23:42:28.886293 kubelet[3515]: E0812 23:42:28.885404 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.886529 kubelet[3515]: W0812 23:42:28.886490 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.886706 kubelet[3515]: E0812 23:42:28.886665 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.887170 kubelet[3515]: E0812 23:42:28.887141 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.887459 kubelet[3515]: W0812 23:42:28.887347 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.887459 kubelet[3515]: E0812 23:42:28.887422 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 12 23:42:28.889279 kubelet[3515]: E0812 23:42:28.887879 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.889279 kubelet[3515]: W0812 23:42:28.887905 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.889279 kubelet[3515]: E0812 23:42:28.887961 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.890789 kubelet[3515]: E0812 23:42:28.890613 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.891277 kubelet[3515]: W0812 23:42:28.890962 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.891277 kubelet[3515]: E0812 23:42:28.891089 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 12 23:42:28.894724 kubelet[3515]: E0812 23:42:28.894484 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.894724 kubelet[3515]: W0812 23:42:28.894520 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.894724 kubelet[3515]: E0812 23:42:28.894722 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.896488 kubelet[3515]: E0812 23:42:28.895863 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.896488 kubelet[3515]: W0812 23:42:28.895894 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.899012 kubelet[3515]: E0812 23:42:28.898128 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 12 23:42:28.899012 kubelet[3515]: E0812 23:42:28.898474 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.899012 kubelet[3515]: W0812 23:42:28.898498 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.899012 kubelet[3515]: E0812 23:42:28.898563 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.901502 kubelet[3515]: E0812 23:42:28.900946 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.901502 kubelet[3515]: W0812 23:42:28.900975 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.902564 kubelet[3515]: E0812 23:42:28.902433 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 12 23:42:28.904561 kubelet[3515]: E0812 23:42:28.903585 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.904561 kubelet[3515]: W0812 23:42:28.903616 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.904561 kubelet[3515]: E0812 23:42:28.903701 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.905699 kubelet[3515]: E0812 23:42:28.905666 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.907138 kubelet[3515]: W0812 23:42:28.905961 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.907138 kubelet[3515]: E0812 23:42:28.906802 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 12 23:42:28.943045 kubelet[3515]: E0812 23:42:28.942590 3515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:42:28.943045 kubelet[3515]: W0812 23:42:28.942745 3515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:42:28.943668 kubelet[3515]: E0812 23:42:28.943359 3515 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:42:28.971700 containerd[2013]: time="2025-08-12T23:42:28.971640313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-679f46b8b8-qtzdm,Uid:b5ffa2cd-8f7f-4ece-bd78-5fbe638ba3d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7c3d268b2f128d362d150f8c88598c6795a6319db6cb293e643f93b3d810f44\""
Aug 12 23:42:29.978345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1240776398.mount: Deactivated successfully.
Aug 12 23:42:30.104727 containerd[2013]: time="2025-08-12T23:42:30.104594219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:30.107182 containerd[2013]: time="2025-08-12T23:42:30.107102591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360"
Aug 12 23:42:30.109208 containerd[2013]: time="2025-08-12T23:42:30.109154987Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:30.113584 containerd[2013]: time="2025-08-12T23:42:30.113518235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:30.115656 containerd[2013]: time="2025-08-12T23:42:30.115581719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.297881847s"
Aug 12 23:42:30.115879 containerd[2013]: time="2025-08-12T23:42:30.115633955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\""
Aug 12 23:42:30.118939 containerd[2013]: time="2025-08-12T23:42:30.118630403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 12 23:42:30.123555 containerd[2013]: time="2025-08-12T23:42:30.123377231Z" level=info msg="CreateContainer within sandbox \"905523c23e5a5fb5f4e31b48aa24cf40d6fb8655e27a3c501eac62b4cf382e90\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 12 23:42:30.152270 containerd[2013]: time="2025-08-12T23:42:30.152167319Z" level=info msg="Container 16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:42:30.158302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845399316.mount: Deactivated successfully.
Aug 12 23:42:30.180450 containerd[2013]: time="2025-08-12T23:42:30.180371303Z" level=info msg="CreateContainer within sandbox \"905523c23e5a5fb5f4e31b48aa24cf40d6fb8655e27a3c501eac62b4cf382e90\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e\""
Aug 12 23:42:30.182363 containerd[2013]: time="2025-08-12T23:42:30.181314311Z" level=info msg="StartContainer for \"16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e\""
Aug 12 23:42:30.185723 containerd[2013]: time="2025-08-12T23:42:30.185496263Z" level=info msg="connecting to shim 16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e" address="unix:///run/containerd/s/747e430f5d68dfd472e48d1c592cfc91005a82b957f70eba3c4b9cb2d80c457f" protocol=ttrpc version=3
Aug 12 23:42:30.227525 systemd[1]: Started cri-containerd-16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e.scope - libcontainer container 16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e.
Aug 12 23:42:30.308423 containerd[2013]: time="2025-08-12T23:42:30.308196144Z" level=info msg="StartContainer for \"16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e\" returns successfully"
Aug 12 23:42:30.334990 systemd[1]: cri-containerd-16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e.scope: Deactivated successfully.
Aug 12 23:42:30.339985 containerd[2013]: time="2025-08-12T23:42:30.339904476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e\" id:\"16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e\" pid:4107 exited_at:{seconds:1755042150 nanos:337898736}"
Aug 12 23:42:30.339985 containerd[2013]: time="2025-08-12T23:42:30.339968472Z" level=info msg="received exit event container_id:\"16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e\" id:\"16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e\" pid:4107 exited_at:{seconds:1755042150 nanos:337898736}"
Aug 12 23:42:30.380468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16b61a4f283ba3334e7dd593a7717fd7fd657a897c0ffc48186916088df5cc5e-rootfs.mount: Deactivated successfully.
Aug 12 23:42:30.696830 kubelet[3515]: E0812 23:42:30.696697 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sh6l" podUID="f465d6b2-8aef-4866-ba3f-bfdd97688b16"
Aug 12 23:42:32.526871 containerd[2013]: time="2025-08-12T23:42:32.526738755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:32.528278 containerd[2013]: time="2025-08-12T23:42:32.528193887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828"
Aug 12 23:42:32.529671 containerd[2013]: time="2025-08-12T23:42:32.529520115Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:32.534738 containerd[2013]: time="2025-08-12T23:42:32.534416751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:32.535942 containerd[2013]: time="2025-08-12T23:42:32.535875675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.416696224s"
Aug 12 23:42:32.535942 containerd[2013]: time="2025-08-12T23:42:32.535937667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Aug 12 23:42:32.539605 containerd[2013]: time="2025-08-12T23:42:32.539532159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Aug 12 23:42:32.565102 containerd[2013]: time="2025-08-12T23:42:32.561255627Z" level=info msg="CreateContainer within sandbox \"d7c3d268b2f128d362d150f8c88598c6795a6319db6cb293e643f93b3d810f44\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 12 23:42:32.575674 containerd[2013]: time="2025-08-12T23:42:32.575607387Z" level=info msg="Container ef59d665518dfbb506465bdc82a4678ff840b5ffefdf71ae3c64077c2c013549: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:42:32.593501 containerd[2013]: time="2025-08-12T23:42:32.593413287Z" level=info msg="CreateContainer within sandbox \"d7c3d268b2f128d362d150f8c88598c6795a6319db6cb293e643f93b3d810f44\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ef59d665518dfbb506465bdc82a4678ff840b5ffefdf71ae3c64077c2c013549\""
Aug 12 23:42:32.595818 containerd[2013]: time="2025-08-12T23:42:32.594808191Z" level=info msg="StartContainer for \"ef59d665518dfbb506465bdc82a4678ff840b5ffefdf71ae3c64077c2c013549\""
Aug 12 23:42:32.599967 containerd[2013]: time="2025-08-12T23:42:32.599885163Z" level=info msg="connecting to shim ef59d665518dfbb506465bdc82a4678ff840b5ffefdf71ae3c64077c2c013549" address="unix:///run/containerd/s/8f153199477682e6c7bc2990a2e2e0d54152b9d3fd8d9b4df8d8a8a8d60c5d5e" protocol=ttrpc version=3
Aug 12 23:42:32.644909 systemd[1]: Started cri-containerd-ef59d665518dfbb506465bdc82a4678ff840b5ffefdf71ae3c64077c2c013549.scope - libcontainer container ef59d665518dfbb506465bdc82a4678ff840b5ffefdf71ae3c64077c2c013549.
Aug 12 23:42:32.695534 kubelet[3515]: E0812 23:42:32.695447 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sh6l" podUID="f465d6b2-8aef-4866-ba3f-bfdd97688b16"
Aug 12 23:42:32.737834 containerd[2013]: time="2025-08-12T23:42:32.737732272Z" level=info msg="StartContainer for \"ef59d665518dfbb506465bdc82a4678ff840b5ffefdf71ae3c64077c2c013549\" returns successfully"
Aug 12 23:42:33.971722 kubelet[3515]: I0812 23:42:33.971523 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-679f46b8b8-qtzdm" podStartSLOduration=3.413100948 podStartE2EDuration="6.971471646s" podCreationTimestamp="2025-08-12 23:42:27 +0000 UTC" firstStartedPulling="2025-08-12 23:42:28.979392841 +0000 UTC m=+33.547571508" lastFinishedPulling="2025-08-12 23:42:32.537763491 +0000 UTC m=+37.105942206" observedRunningTime="2025-08-12 23:42:32.973530257 +0000 UTC m=+37.541708924" watchObservedRunningTime="2025-08-12 23:42:33.971471646 +0000 UTC m=+38.539650397"
Aug 12 23:42:34.695916 kubelet[3515]: E0812 23:42:34.695739 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sh6l" podUID="f465d6b2-8aef-4866-ba3f-bfdd97688b16"
Aug 12 23:42:35.925737 containerd[2013]: time="2025-08-12T23:42:35.925683355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:35.929387 containerd[2013]: time="2025-08-12T23:42:35.928243472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320"
Aug 12 23:42:35.929387 containerd[2013]: time="2025-08-12T23:42:35.929007176Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:35.931918 containerd[2013]: time="2025-08-12T23:42:35.931816472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:42:35.933286 containerd[2013]: time="2025-08-12T23:42:35.933191360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.393582953s"
Aug 12 23:42:35.933286 containerd[2013]: time="2025-08-12T23:42:35.933266096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
Aug 12 23:42:35.940275 containerd[2013]: time="2025-08-12T23:42:35.940152056Z" level=info msg="CreateContainer within sandbox \"905523c23e5a5fb5f4e31b48aa24cf40d6fb8655e27a3c501eac62b4cf382e90\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 12 23:42:35.957898 containerd[2013]: time="2025-08-12T23:42:35.957851864Z" level=info msg="Container af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:42:35.973031 containerd[2013]: time="2025-08-12T23:42:35.972980396Z" level=info msg="CreateContainer within sandbox \"905523c23e5a5fb5f4e31b48aa24cf40d6fb8655e27a3c501eac62b4cf382e90\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e\""
Aug 12 23:42:35.974329 containerd[2013]: time="2025-08-12T23:42:35.974268680Z" level=info msg="StartContainer for \"af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e\""
Aug 12 23:42:35.978826 containerd[2013]: time="2025-08-12T23:42:35.978083996Z" level=info msg="connecting to shim af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e" address="unix:///run/containerd/s/747e430f5d68dfd472e48d1c592cfc91005a82b957f70eba3c4b9cb2d80c457f" protocol=ttrpc version=3
Aug 12 23:42:36.025590 systemd[1]: Started cri-containerd-af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e.scope - libcontainer container af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e.
Aug 12 23:42:36.118163 containerd[2013]: time="2025-08-12T23:42:36.117992308Z" level=info msg="StartContainer for \"af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e\" returns successfully"
Aug 12 23:42:36.695582 kubelet[3515]: E0812 23:42:36.695500 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sh6l" podUID="f465d6b2-8aef-4866-ba3f-bfdd97688b16"
Aug 12 23:42:37.058600 containerd[2013]: time="2025-08-12T23:42:37.058455305Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 12 23:42:37.063307 systemd[1]: cri-containerd-af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e.scope: Deactivated successfully.
Aug 12 23:42:37.064570 systemd[1]: cri-containerd-af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e.scope: Consumed 963ms CPU time, 187.3M memory peak, 165.8M written to disk.
Aug 12 23:42:37.066999 containerd[2013]: time="2025-08-12T23:42:37.065807537Z" level=info msg="received exit event container_id:\"af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e\" id:\"af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e\" pid:4211 exited_at:{seconds:1755042157 nanos:65460713}"
Aug 12 23:42:37.066999 containerd[2013]: time="2025-08-12T23:42:37.066497777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e\" id:\"af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e\" pid:4211 exited_at:{seconds:1755042157 nanos:65460713}"
Aug 12 23:42:37.109180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af5e0b5b0078736b60d83238c8353b49ea4bc72cd1e934cc37a6dbf78b510b5e-rootfs.mount: Deactivated successfully.
Aug 12 23:42:37.115385 kubelet[3515]: I0812 23:42:37.115349 3515 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 12 23:42:37.200396 systemd[1]: Created slice kubepods-burstable-poda8b776ad_ce4c_49ef_85b7_5aacc790ac0b.slice - libcontainer container kubepods-burstable-poda8b776ad_ce4c_49ef_85b7_5aacc790ac0b.slice.
Aug 12 23:42:37.229315 systemd[1]: Created slice kubepods-burstable-pod6486c795_f806_48bb_bfbf_6140cfb4bdcd.slice - libcontainer container kubepods-burstable-pod6486c795_f806_48bb_bfbf_6140cfb4bdcd.slice.
Aug 12 23:42:37.249899 kubelet[3515]: I0812 23:42:37.248696 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n446\" (UniqueName: \"kubernetes.io/projected/d908bbbe-0216-4d9e-bc2f-95bf8249336c-kube-api-access-8n446\") pod \"calico-kube-controllers-7df96dbd7c-shcc7\" (UID: \"d908bbbe-0216-4d9e-bc2f-95bf8249336c\") " pod="calico-system/calico-kube-controllers-7df96dbd7c-shcc7"
Aug 12 23:42:37.249899 kubelet[3515]: I0812 23:42:37.248831 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6486c795-f806-48bb-bfbf-6140cfb4bdcd-config-volume\") pod \"coredns-668d6bf9bc-pmv6t\" (UID: \"6486c795-f806-48bb-bfbf-6140cfb4bdcd\") " pod="kube-system/coredns-668d6bf9bc-pmv6t"
Aug 12 23:42:37.249899 kubelet[3515]: I0812 23:42:37.248890 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d908bbbe-0216-4d9e-bc2f-95bf8249336c-tigera-ca-bundle\") pod \"calico-kube-controllers-7df96dbd7c-shcc7\" (UID: \"d908bbbe-0216-4d9e-bc2f-95bf8249336c\") " pod="calico-system/calico-kube-controllers-7df96dbd7c-shcc7"
Aug 12 23:42:37.249899 kubelet[3515]: I0812 23:42:37.248980 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxsj\" (UniqueName: \"kubernetes.io/projected/a8b776ad-ce4c-49ef-85b7-5aacc790ac0b-kube-api-access-dqxsj\") pod \"coredns-668d6bf9bc-mmcwn\" (UID: \"a8b776ad-ce4c-49ef-85b7-5aacc790ac0b\") " pod="kube-system/coredns-668d6bf9bc-mmcwn"
Aug 12 23:42:37.249899 kubelet[3515]: I0812 23:42:37.249046 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csgn\" (UniqueName: \"kubernetes.io/projected/6486c795-f806-48bb-bfbf-6140cfb4bdcd-kube-api-access-6csgn\") pod \"coredns-668d6bf9bc-pmv6t\" (UID: \"6486c795-f806-48bb-bfbf-6140cfb4bdcd\") " pod="kube-system/coredns-668d6bf9bc-pmv6t"
Aug 12 23:42:37.252211 kubelet[3515]: I0812 23:42:37.249121 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8b776ad-ce4c-49ef-85b7-5aacc790ac0b-config-volume\") pod \"coredns-668d6bf9bc-mmcwn\" (UID: \"a8b776ad-ce4c-49ef-85b7-5aacc790ac0b\") " pod="kube-system/coredns-668d6bf9bc-mmcwn"
Aug 12 23:42:37.261127 systemd[1]: Created slice kubepods-besteffort-podd908bbbe_0216_4d9e_bc2f_95bf8249336c.slice - libcontainer container kubepods-besteffort-podd908bbbe_0216_4d9e_bc2f_95bf8249336c.slice.
Aug 12 23:42:37.287547 systemd[1]: Created slice kubepods-besteffort-podbd6c9aed_3c52_4cf3_b20a_0312b540cfff.slice - libcontainer container kubepods-besteffort-podbd6c9aed_3c52_4cf3_b20a_0312b540cfff.slice.
Aug 12 23:42:37.316320 systemd[1]: Created slice kubepods-besteffort-pod31ccd3c2_6b19_4884_a3e2_dbbd56ff7d5b.slice - libcontainer container kubepods-besteffort-pod31ccd3c2_6b19_4884_a3e2_dbbd56ff7d5b.slice.
Aug 12 23:42:37.342439 systemd[1]: Created slice kubepods-besteffort-podfc9efe89_2ccd_45b4_85b7_881c9027351c.slice - libcontainer container kubepods-besteffort-podfc9efe89_2ccd_45b4_85b7_881c9027351c.slice.
Aug 12 23:42:37.350353 kubelet[3515]: I0812 23:42:37.350291 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc9efe89-2ccd-45b4-85b7-881c9027351c-whisker-ca-bundle\") pod \"whisker-5cfb4ccd68-cnj74\" (UID: \"fc9efe89-2ccd-45b4-85b7-881c9027351c\") " pod="calico-system/whisker-5cfb4ccd68-cnj74"
Aug 12 23:42:37.350495 kubelet[3515]: I0812 23:42:37.350408 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09ff9862-d31d-47c3-9c5e-b1b67b525562-config\") pod \"goldmane-768f4c5c69-dbgt6\" (UID: \"09ff9862-d31d-47c3-9c5e-b1b67b525562\") " pod="calico-system/goldmane-768f4c5c69-dbgt6"
Aug 12 23:42:37.350495 kubelet[3515]: I0812 23:42:37.350450 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl8q5\" (UniqueName: \"kubernetes.io/projected/31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b-kube-api-access-hl8q5\") pod \"calico-apiserver-865b585bd6-h7tzr\" (UID: \"31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b\") " pod="calico-apiserver/calico-apiserver-865b585bd6-h7tzr"
Aug 12 23:42:37.350613 kubelet[3515]: I0812 23:42:37.350493 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b-calico-apiserver-certs\") pod \"calico-apiserver-865b585bd6-h7tzr\" (UID: \"31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b\") " pod="calico-apiserver/calico-apiserver-865b585bd6-h7tzr"
Aug 12 23:42:37.350613 kubelet[3515]: I0812 23:42:37.350530 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5d87\" (UniqueName: \"kubernetes.io/projected/09ff9862-d31d-47c3-9c5e-b1b67b525562-kube-api-access-b5d87\") pod \"goldmane-768f4c5c69-dbgt6\" (UID: \"09ff9862-d31d-47c3-9c5e-b1b67b525562\") " pod="calico-system/goldmane-768f4c5c69-dbgt6"
Aug 12 23:42:37.350613 kubelet[3515]: I0812 23:42:37.350605 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd6c9aed-3c52-4cf3-b20a-0312b540cfff-calico-apiserver-certs\") pod \"calico-apiserver-865b585bd6-k8x9p\" (UID: \"bd6c9aed-3c52-4cf3-b20a-0312b540cfff\") " pod="calico-apiserver/calico-apiserver-865b585bd6-k8x9p"
Aug 12 23:42:37.350768 kubelet[3515]: I0812 23:42:37.350644 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8rcb\" (UniqueName: \"kubernetes.io/projected/bd6c9aed-3c52-4cf3-b20a-0312b540cfff-kube-api-access-w8rcb\") pod \"calico-apiserver-865b585bd6-k8x9p\" (UID: \"bd6c9aed-3c52-4cf3-b20a-0312b540cfff\") " pod="calico-apiserver/calico-apiserver-865b585bd6-k8x9p"
Aug 12 23:42:37.350768 kubelet[3515]: I0812 23:42:37.350738 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ff9862-d31d-47c3-9c5e-b1b67b525562-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-dbgt6\" (UID: \"09ff9862-d31d-47c3-9c5e-b1b67b525562\") " pod="calico-system/goldmane-768f4c5c69-dbgt6"
Aug 12 23:42:37.350873 kubelet[3515]: I0812 23:42:37.350773 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc9efe89-2ccd-45b4-85b7-881c9027351c-whisker-backend-key-pair\") pod \"whisker-5cfb4ccd68-cnj74\" (UID: \"fc9efe89-2ccd-45b4-85b7-881c9027351c\") " pod="calico-system/whisker-5cfb4ccd68-cnj74"
Aug 12 23:42:37.350873 kubelet[3515]: I0812 23:42:37.350819 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/09ff9862-d31d-47c3-9c5e-b1b67b525562-goldmane-key-pair\") pod \"goldmane-768f4c5c69-dbgt6\" (UID: \"09ff9862-d31d-47c3-9c5e-b1b67b525562\") " pod="calico-system/goldmane-768f4c5c69-dbgt6"
Aug 12 23:42:37.350873 kubelet[3515]: I0812 23:42:37.350856 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn46b\" (UniqueName: \"kubernetes.io/projected/fc9efe89-2ccd-45b4-85b7-881c9027351c-kube-api-access-kn46b\") pod \"whisker-5cfb4ccd68-cnj74\" (UID: \"fc9efe89-2ccd-45b4-85b7-881c9027351c\") " pod="calico-system/whisker-5cfb4ccd68-cnj74"
Aug 12 23:42:37.357538 systemd[1]: Created slice kubepods-besteffort-pod09ff9862_d31d_47c3_9c5e_b1b67b525562.slice - libcontainer container kubepods-besteffort-pod09ff9862_d31d_47c3_9c5e_b1b67b525562.slice.
Aug 12 23:42:37.524989 containerd[2013]: time="2025-08-12T23:42:37.524905195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mmcwn,Uid:a8b776ad-ce4c-49ef-85b7-5aacc790ac0b,Namespace:kube-system,Attempt:0,}"
Aug 12 23:42:37.554939 containerd[2013]: time="2025-08-12T23:42:37.554874980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pmv6t,Uid:6486c795-f806-48bb-bfbf-6140cfb4bdcd,Namespace:kube-system,Attempt:0,}"
Aug 12 23:42:37.578263 containerd[2013]: time="2025-08-12T23:42:37.577410860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df96dbd7c-shcc7,Uid:d908bbbe-0216-4d9e-bc2f-95bf8249336c,Namespace:calico-system,Attempt:0,}"
Aug 12 23:42:37.600355 containerd[2013]: time="2025-08-12T23:42:37.600215480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b585bd6-k8x9p,Uid:bd6c9aed-3c52-4cf3-b20a-0312b540cfff,Namespace:calico-apiserver,Attempt:0,}"
Aug 12 23:42:37.628673 containerd[2013]: time="2025-08-12T23:42:37.628613768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b585bd6-h7tzr,Uid:31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b,Namespace:calico-apiserver,Attempt:0,}"
Aug 12 23:42:37.666722 containerd[2013]: time="2025-08-12T23:42:37.666631016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cfb4ccd68-cnj74,Uid:fc9efe89-2ccd-45b4-85b7-881c9027351c,Namespace:calico-system,Attempt:0,}"
Aug 12 23:42:37.681815 containerd[2013]: time="2025-08-12T23:42:37.681682844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dbgt6,Uid:09ff9862-d31d-47c3-9c5e-b1b67b525562,Namespace:calico-system,Attempt:0,}"
Aug 12 23:42:38.048773 containerd[2013]: time="2025-08-12T23:42:38.048512394Z" level=error msg="Failed to destroy network for sandbox \"f0f7ccd5c6507250b510bb9d5dbf2b109c116aeec9498099b76bee00a4c7e4de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:42:38.305362 containerd[2013]: time="2025-08-12T23:42:38.303909139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mmcwn,Uid:a8b776ad-ce4c-49ef-85b7-5aacc790ac0b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f7ccd5c6507250b510bb9d5dbf2b109c116aeec9498099b76bee00a4c7e4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 12 23:42:38.320623 kubelet[3515]: E0812 23:42:38.320557 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f7ccd5c6507250b510bb9d5dbf2b109c116aeec9498099b76bee00a4c7e4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running
and has mounted /var/lib/calico/" Aug 12 23:42:38.321521 kubelet[3515]: E0812 23:42:38.321372 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f7ccd5c6507250b510bb9d5dbf2b109c116aeec9498099b76bee00a4c7e4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mmcwn" Aug 12 23:42:38.321521 kubelet[3515]: E0812 23:42:38.321454 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f7ccd5c6507250b510bb9d5dbf2b109c116aeec9498099b76bee00a4c7e4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mmcwn" Aug 12 23:42:38.322384 kubelet[3515]: E0812 23:42:38.321744 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mmcwn_kube-system(a8b776ad-ce4c-49ef-85b7-5aacc790ac0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mmcwn_kube-system(a8b776ad-ce4c-49ef-85b7-5aacc790ac0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0f7ccd5c6507250b510bb9d5dbf2b109c116aeec9498099b76bee00a4c7e4de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mmcwn" podUID="a8b776ad-ce4c-49ef-85b7-5aacc790ac0b" Aug 12 23:42:38.380910 containerd[2013]: time="2025-08-12T23:42:38.380621912Z" level=error msg="Failed to destroy network for sandbox 
\"e3e50102700fd8a105b54ccffd33ddcbcad80398267211fa99b4685f707610e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.390485 systemd[1]: run-netns-cni\x2da8870982\x2d46de\x2d770f\x2dc0a3\x2d385ec2fae305.mount: Deactivated successfully. Aug 12 23:42:38.398092 containerd[2013]: time="2025-08-12T23:42:38.397412648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pmv6t,Uid:6486c795-f806-48bb-bfbf-6140cfb4bdcd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e50102700fd8a105b54ccffd33ddcbcad80398267211fa99b4685f707610e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.400323 kubelet[3515]: E0812 23:42:38.398944 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e50102700fd8a105b54ccffd33ddcbcad80398267211fa99b4685f707610e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.400323 kubelet[3515]: E0812 23:42:38.399035 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e50102700fd8a105b54ccffd33ddcbcad80398267211fa99b4685f707610e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pmv6t" Aug 12 23:42:38.400323 kubelet[3515]: E0812 23:42:38.399069 3515 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e50102700fd8a105b54ccffd33ddcbcad80398267211fa99b4685f707610e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pmv6t" Aug 12 23:42:38.400615 kubelet[3515]: E0812 23:42:38.399141 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pmv6t_kube-system(6486c795-f806-48bb-bfbf-6140cfb4bdcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pmv6t_kube-system(6486c795-f806-48bb-bfbf-6140cfb4bdcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3e50102700fd8a105b54ccffd33ddcbcad80398267211fa99b4685f707610e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pmv6t" podUID="6486c795-f806-48bb-bfbf-6140cfb4bdcd" Aug 12 23:42:38.567944 containerd[2013]: time="2025-08-12T23:42:38.567643437Z" level=error msg="Failed to destroy network for sandbox \"8d60b90e31c69aeb79dc947f15ba205d058ff1f6770d92e223a7fbb166702a85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.573949 systemd[1]: run-netns-cni\x2d8655659e\x2deae2\x2d6274\x2de59c\x2de93af28b6d13.mount: Deactivated successfully. 
Aug 12 23:42:38.580863 containerd[2013]: time="2025-08-12T23:42:38.580360113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b585bd6-h7tzr,Uid:31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d60b90e31c69aeb79dc947f15ba205d058ff1f6770d92e223a7fbb166702a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.581073 kubelet[3515]: E0812 23:42:38.580673 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d60b90e31c69aeb79dc947f15ba205d058ff1f6770d92e223a7fbb166702a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.581073 kubelet[3515]: E0812 23:42:38.580749 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d60b90e31c69aeb79dc947f15ba205d058ff1f6770d92e223a7fbb166702a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865b585bd6-h7tzr" Aug 12 23:42:38.581073 kubelet[3515]: E0812 23:42:38.580782 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d60b90e31c69aeb79dc947f15ba205d058ff1f6770d92e223a7fbb166702a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-865b585bd6-h7tzr" Aug 12 23:42:38.583112 kubelet[3515]: E0812 23:42:38.581407 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-865b585bd6-h7tzr_calico-apiserver(31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-865b585bd6-h7tzr_calico-apiserver(31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d60b90e31c69aeb79dc947f15ba205d058ff1f6770d92e223a7fbb166702a85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-865b585bd6-h7tzr" podUID="31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b" Aug 12 23:42:38.588078 containerd[2013]: time="2025-08-12T23:42:38.587927385Z" level=error msg="Failed to destroy network for sandbox \"6e3b744ed0b76153e2d56b486b69be3974bb92481debf6426683d5e8f32588e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.592278 containerd[2013]: time="2025-08-12T23:42:38.591634437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cfb4ccd68-cnj74,Uid:fc9efe89-2ccd-45b4-85b7-881c9027351c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e3b744ed0b76153e2d56b486b69be3974bb92481debf6426683d5e8f32588e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.597511 kubelet[3515]: E0812 23:42:38.593364 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"6e3b744ed0b76153e2d56b486b69be3974bb92481debf6426683d5e8f32588e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.597511 kubelet[3515]: E0812 23:42:38.596089 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e3b744ed0b76153e2d56b486b69be3974bb92481debf6426683d5e8f32588e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cfb4ccd68-cnj74" Aug 12 23:42:38.597511 kubelet[3515]: E0812 23:42:38.596171 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e3b744ed0b76153e2d56b486b69be3974bb92481debf6426683d5e8f32588e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cfb4ccd68-cnj74" Aug 12 23:42:38.595665 systemd[1]: run-netns-cni\x2de381e309\x2d108f\x2dc92e\x2d5268\x2d944dcea486c7.mount: Deactivated successfully. 
Aug 12 23:42:38.597944 kubelet[3515]: E0812 23:42:38.596321 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cfb4ccd68-cnj74_calico-system(fc9efe89-2ccd-45b4-85b7-881c9027351c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cfb4ccd68-cnj74_calico-system(fc9efe89-2ccd-45b4-85b7-881c9027351c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e3b744ed0b76153e2d56b486b69be3974bb92481debf6426683d5e8f32588e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cfb4ccd68-cnj74" podUID="fc9efe89-2ccd-45b4-85b7-881c9027351c" Aug 12 23:42:38.603459 containerd[2013]: time="2025-08-12T23:42:38.603374013Z" level=error msg="Failed to destroy network for sandbox \"67a4fc13280480179ab20a9935bab5db5b7516d9d85e6d71e9f3eeb184da1806\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.606763 containerd[2013]: time="2025-08-12T23:42:38.606420729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b585bd6-k8x9p,Uid:bd6c9aed-3c52-4cf3-b20a-0312b540cfff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67a4fc13280480179ab20a9935bab5db5b7516d9d85e6d71e9f3eeb184da1806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.607520 kubelet[3515]: E0812 23:42:38.607449 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"67a4fc13280480179ab20a9935bab5db5b7516d9d85e6d71e9f3eeb184da1806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.607823 kubelet[3515]: E0812 23:42:38.607704 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67a4fc13280480179ab20a9935bab5db5b7516d9d85e6d71e9f3eeb184da1806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865b585bd6-k8x9p" Aug 12 23:42:38.607823 kubelet[3515]: E0812 23:42:38.607773 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67a4fc13280480179ab20a9935bab5db5b7516d9d85e6d71e9f3eeb184da1806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865b585bd6-k8x9p" Aug 12 23:42:38.608249 kubelet[3515]: E0812 23:42:38.608013 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-865b585bd6-k8x9p_calico-apiserver(bd6c9aed-3c52-4cf3-b20a-0312b540cfff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-865b585bd6-k8x9p_calico-apiserver(bd6c9aed-3c52-4cf3-b20a-0312b540cfff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67a4fc13280480179ab20a9935bab5db5b7516d9d85e6d71e9f3eeb184da1806\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-865b585bd6-k8x9p" podUID="bd6c9aed-3c52-4cf3-b20a-0312b540cfff" Aug 12 23:42:38.616696 containerd[2013]: time="2025-08-12T23:42:38.616630197Z" level=error msg="Failed to destroy network for sandbox \"fffb8fe52fd6d2e29787effd18ab2ae65d32c88c46e57ec614c1a9ef6f2eaf26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.620137 containerd[2013]: time="2025-08-12T23:42:38.619964457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df96dbd7c-shcc7,Uid:d908bbbe-0216-4d9e-bc2f-95bf8249336c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fffb8fe52fd6d2e29787effd18ab2ae65d32c88c46e57ec614c1a9ef6f2eaf26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.620745 kubelet[3515]: E0812 23:42:38.620446 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fffb8fe52fd6d2e29787effd18ab2ae65d32c88c46e57ec614c1a9ef6f2eaf26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.620745 kubelet[3515]: E0812 23:42:38.620529 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fffb8fe52fd6d2e29787effd18ab2ae65d32c88c46e57ec614c1a9ef6f2eaf26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7df96dbd7c-shcc7" Aug 12 23:42:38.620745 kubelet[3515]: E0812 23:42:38.620566 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fffb8fe52fd6d2e29787effd18ab2ae65d32c88c46e57ec614c1a9ef6f2eaf26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7df96dbd7c-shcc7" Aug 12 23:42:38.622299 kubelet[3515]: E0812 23:42:38.621660 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7df96dbd7c-shcc7_calico-system(d908bbbe-0216-4d9e-bc2f-95bf8249336c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7df96dbd7c-shcc7_calico-system(d908bbbe-0216-4d9e-bc2f-95bf8249336c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fffb8fe52fd6d2e29787effd18ab2ae65d32c88c46e57ec614c1a9ef6f2eaf26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7df96dbd7c-shcc7" podUID="d908bbbe-0216-4d9e-bc2f-95bf8249336c" Aug 12 23:42:38.625110 containerd[2013]: time="2025-08-12T23:42:38.624583305Z" level=error msg="Failed to destroy network for sandbox \"69dd0bcbe5152e20727a8fcd9db06a0d743853c04f167f3f37697a7a7e9947ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.627742 containerd[2013]: time="2025-08-12T23:42:38.627624765Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-dbgt6,Uid:09ff9862-d31d-47c3-9c5e-b1b67b525562,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69dd0bcbe5152e20727a8fcd9db06a0d743853c04f167f3f37697a7a7e9947ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.628157 kubelet[3515]: E0812 23:42:38.628026 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69dd0bcbe5152e20727a8fcd9db06a0d743853c04f167f3f37697a7a7e9947ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.628157 kubelet[3515]: E0812 23:42:38.628132 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69dd0bcbe5152e20727a8fcd9db06a0d743853c04f167f3f37697a7a7e9947ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-dbgt6" Aug 12 23:42:38.628826 kubelet[3515]: E0812 23:42:38.628194 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69dd0bcbe5152e20727a8fcd9db06a0d743853c04f167f3f37697a7a7e9947ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-dbgt6" Aug 12 23:42:38.628826 kubelet[3515]: E0812 23:42:38.628344 3515 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-dbgt6_calico-system(09ff9862-d31d-47c3-9c5e-b1b67b525562)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-dbgt6_calico-system(09ff9862-d31d-47c3-9c5e-b1b67b525562)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69dd0bcbe5152e20727a8fcd9db06a0d743853c04f167f3f37697a7a7e9947ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-dbgt6" podUID="09ff9862-d31d-47c3-9c5e-b1b67b525562" Aug 12 23:42:38.708845 systemd[1]: Created slice kubepods-besteffort-podf465d6b2_8aef_4866_ba3f_bfdd97688b16.slice - libcontainer container kubepods-besteffort-podf465d6b2_8aef_4866_ba3f_bfdd97688b16.slice. Aug 12 23:42:38.714139 containerd[2013]: time="2025-08-12T23:42:38.714035289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sh6l,Uid:f465d6b2-8aef-4866-ba3f-bfdd97688b16,Namespace:calico-system,Attempt:0,}" Aug 12 23:42:38.803676 containerd[2013]: time="2025-08-12T23:42:38.803588446Z" level=error msg="Failed to destroy network for sandbox \"ab3b1f5f83900e30ddc0ea8143d2966ff9f5defa913d96257693138de6ae42e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.805857 containerd[2013]: time="2025-08-12T23:42:38.805792714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sh6l,Uid:f465d6b2-8aef-4866-ba3f-bfdd97688b16,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3b1f5f83900e30ddc0ea8143d2966ff9f5defa913d96257693138de6ae42e1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.806327 kubelet[3515]: E0812 23:42:38.806136 3515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3b1f5f83900e30ddc0ea8143d2966ff9f5defa913d96257693138de6ae42e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:42:38.806327 kubelet[3515]: E0812 23:42:38.806210 3515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3b1f5f83900e30ddc0ea8143d2966ff9f5defa913d96257693138de6ae42e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8sh6l" Aug 12 23:42:38.806327 kubelet[3515]: E0812 23:42:38.806299 3515 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3b1f5f83900e30ddc0ea8143d2966ff9f5defa913d96257693138de6ae42e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8sh6l" Aug 12 23:42:38.806539 kubelet[3515]: E0812 23:42:38.806368 3515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8sh6l_calico-system(f465d6b2-8aef-4866-ba3f-bfdd97688b16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8sh6l_calico-system(f465d6b2-8aef-4866-ba3f-bfdd97688b16)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"ab3b1f5f83900e30ddc0ea8143d2966ff9f5defa913d96257693138de6ae42e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8sh6l" podUID="f465d6b2-8aef-4866-ba3f-bfdd97688b16" Aug 12 23:42:38.989357 containerd[2013]: time="2025-08-12T23:42:38.987708803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 12 23:42:39.107920 systemd[1]: run-netns-cni\x2d956aa8b5\x2d751e\x2d387b\x2d9e49\x2db101b35d455f.mount: Deactivated successfully. Aug 12 23:42:39.108326 systemd[1]: run-netns-cni\x2df7309c29\x2d999e\x2d74ce\x2dd058\x2ddb9784a0061e.mount: Deactivated successfully. Aug 12 23:42:39.108590 systemd[1]: run-netns-cni\x2d65e1b56f\x2df9e0\x2d2d51\x2d44d6\x2df99a80afe339.mount: Deactivated successfully. Aug 12 23:42:45.106837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204122038.mount: Deactivated successfully. 
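Every `RunPodSandbox` failure above carries the same root cause: the Calico CNI plugin stats `/var/lib/calico/nodename` to learn the node's name, and that file only exists once the calico-node container has started and mounted `/var/lib/calico/` — which is exactly why the errors stop after the `ghcr.io/flatcar/calico/node` image is pulled below. A minimal sketch of that readiness check (the helper name `check_calico_ready` is hypothetical, not part of Calico; the node name matches the `ip-172-31-28-88` host seen later in this log):

```shell
#!/bin/sh
# check_calico_ready PATH
# Mirrors the precondition the Calico CNI plugin enforces before wiring a
# pod: the nodename file must exist, or sandbox setup fails with
# "stat .../nodename: no such file or directory".
check_calico_ready() {
    if [ -f "$1" ]; then
        # calico-node has written the node name; CNI adds can succeed.
        echo "ready: $(cat "$1")"
    else
        # calico-node not yet running (or /var/lib/calico/ not mounted).
        echo "not ready: $1 missing"
    fi
}

check_calico_ready /var/lib/calico/nodename
```

On a node in the state captured above, this prints the "not ready" branch until calico-node starts; the usual fix is to wait for (or restart) the calico-node DaemonSet pod rather than to create the file by hand.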
Aug 12 23:42:45.173900 containerd[2013]: time="2025-08-12T23:42:45.173776153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:45.175755 containerd[2013]: time="2025-08-12T23:42:45.175650217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 12 23:42:45.178371 containerd[2013]: time="2025-08-12T23:42:45.178283833Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:45.182578 containerd[2013]: time="2025-08-12T23:42:45.182496853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:45.184666 containerd[2013]: time="2025-08-12T23:42:45.184199509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.194649006s" Aug 12 23:42:45.184666 containerd[2013]: time="2025-08-12T23:42:45.184305757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 12 23:42:45.220710 containerd[2013]: time="2025-08-12T23:42:45.220580642Z" level=info msg="CreateContainer within sandbox \"905523c23e5a5fb5f4e31b48aa24cf40d6fb8655e27a3c501eac62b4cf382e90\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 12 23:42:45.243269 containerd[2013]: time="2025-08-12T23:42:45.241994414Z" level=info msg="Container 
425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:42:45.252321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3612925220.mount: Deactivated successfully. Aug 12 23:42:45.292156 containerd[2013]: time="2025-08-12T23:42:45.292078946Z" level=info msg="CreateContainer within sandbox \"905523c23e5a5fb5f4e31b48aa24cf40d6fb8655e27a3c501eac62b4cf382e90\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b\"" Aug 12 23:42:45.294251 containerd[2013]: time="2025-08-12T23:42:45.294150458Z" level=info msg="StartContainer for \"425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b\"" Aug 12 23:42:45.297348 containerd[2013]: time="2025-08-12T23:42:45.297269438Z" level=info msg="connecting to shim 425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b" address="unix:///run/containerd/s/747e430f5d68dfd472e48d1c592cfc91005a82b957f70eba3c4b9cb2d80c457f" protocol=ttrpc version=3 Aug 12 23:42:45.371602 systemd[1]: Started cri-containerd-425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b.scope - libcontainer container 425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b. Aug 12 23:42:45.455702 containerd[2013]: time="2025-08-12T23:42:45.455551983Z" level=info msg="StartContainer for \"425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b\" returns successfully" Aug 12 23:42:45.750573 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 12 23:42:45.750761 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 12 23:42:46.023546 kubelet[3515]: I0812 23:42:46.022833 3515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc9efe89-2ccd-45b4-85b7-881c9027351c-whisker-ca-bundle\") pod \"fc9efe89-2ccd-45b4-85b7-881c9027351c\" (UID: \"fc9efe89-2ccd-45b4-85b7-881c9027351c\") " Aug 12 23:42:46.023546 kubelet[3515]: I0812 23:42:46.023335 3515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn46b\" (UniqueName: \"kubernetes.io/projected/fc9efe89-2ccd-45b4-85b7-881c9027351c-kube-api-access-kn46b\") pod \"fc9efe89-2ccd-45b4-85b7-881c9027351c\" (UID: \"fc9efe89-2ccd-45b4-85b7-881c9027351c\") " Aug 12 23:42:46.023546 kubelet[3515]: I0812 23:42:46.023468 3515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc9efe89-2ccd-45b4-85b7-881c9027351c-whisker-backend-key-pair\") pod \"fc9efe89-2ccd-45b4-85b7-881c9027351c\" (UID: \"fc9efe89-2ccd-45b4-85b7-881c9027351c\") " Aug 12 23:42:46.023546 kubelet[3515]: I0812 23:42:46.023544 3515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9efe89-2ccd-45b4-85b7-881c9027351c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fc9efe89-2ccd-45b4-85b7-881c9027351c" (UID: "fc9efe89-2ccd-45b4-85b7-881c9027351c"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 12 23:42:46.030364 kubelet[3515]: I0812 23:42:46.029946 3515 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc9efe89-2ccd-45b4-85b7-881c9027351c-whisker-ca-bundle\") on node \"ip-172-31-28-88\" DevicePath \"\"" Aug 12 23:42:46.040857 kubelet[3515]: I0812 23:42:46.040409 3515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9efe89-2ccd-45b4-85b7-881c9027351c-kube-api-access-kn46b" (OuterVolumeSpecName: "kube-api-access-kn46b") pod "fc9efe89-2ccd-45b4-85b7-881c9027351c" (UID: "fc9efe89-2ccd-45b4-85b7-881c9027351c"). InnerVolumeSpecName "kube-api-access-kn46b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 12 23:42:46.041935 kubelet[3515]: I0812 23:42:46.040980 3515 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9efe89-2ccd-45b4-85b7-881c9027351c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fc9efe89-2ccd-45b4-85b7-881c9027351c" (UID: "fc9efe89-2ccd-45b4-85b7-881c9027351c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 12 23:42:46.071446 systemd[1]: Removed slice kubepods-besteffort-podfc9efe89_2ccd_45b4_85b7_881c9027351c.slice - libcontainer container kubepods-besteffort-podfc9efe89_2ccd_45b4_85b7_881c9027351c.slice. Aug 12 23:42:46.111440 systemd[1]: var-lib-kubelet-pods-fc9efe89\x2d2ccd\x2d45b4\x2d85b7\x2d881c9027351c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkn46b.mount: Deactivated successfully. Aug 12 23:42:46.112134 systemd[1]: var-lib-kubelet-pods-fc9efe89\x2d2ccd\x2d45b4\x2d85b7\x2d881c9027351c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 12 23:42:46.131704 kubelet[3515]: I0812 23:42:46.130821 3515 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kn46b\" (UniqueName: \"kubernetes.io/projected/fc9efe89-2ccd-45b4-85b7-881c9027351c-kube-api-access-kn46b\") on node \"ip-172-31-28-88\" DevicePath \"\"" Aug 12 23:42:46.131704 kubelet[3515]: I0812 23:42:46.130869 3515 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc9efe89-2ccd-45b4-85b7-881c9027351c-whisker-backend-key-pair\") on node \"ip-172-31-28-88\" DevicePath \"\"" Aug 12 23:42:46.181278 kubelet[3515]: I0812 23:42:46.180858 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j5klq" podStartSLOduration=1.8105937970000001 podStartE2EDuration="18.180828722s" podCreationTimestamp="2025-08-12 23:42:28 +0000 UTC" firstStartedPulling="2025-08-12 23:42:28.815541948 +0000 UTC m=+33.383720615" lastFinishedPulling="2025-08-12 23:42:45.185776885 +0000 UTC m=+49.753955540" observedRunningTime="2025-08-12 23:42:46.14094401 +0000 UTC m=+50.709122701" watchObservedRunningTime="2025-08-12 23:42:46.180828722 +0000 UTC m=+50.749007377" Aug 12 23:42:46.289302 systemd[1]: Created slice kubepods-besteffort-pode80c5fbc_edf6_42d6_8d61_715fd9d4491c.slice - libcontainer container kubepods-besteffort-pode80c5fbc_edf6_42d6_8d61_715fd9d4491c.slice. 
Aug 12 23:42:46.332123 kubelet[3515]: I0812 23:42:46.331956 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80c5fbc-edf6-42d6-8d61-715fd9d4491c-whisker-ca-bundle\") pod \"whisker-5fc66f597b-jxx8s\" (UID: \"e80c5fbc-edf6-42d6-8d61-715fd9d4491c\") " pod="calico-system/whisker-5fc66f597b-jxx8s" Aug 12 23:42:46.333537 kubelet[3515]: I0812 23:42:46.333287 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e80c5fbc-edf6-42d6-8d61-715fd9d4491c-whisker-backend-key-pair\") pod \"whisker-5fc66f597b-jxx8s\" (UID: \"e80c5fbc-edf6-42d6-8d61-715fd9d4491c\") " pod="calico-system/whisker-5fc66f597b-jxx8s" Aug 12 23:42:46.333714 kubelet[3515]: I0812 23:42:46.333665 3515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55x58\" (UniqueName: \"kubernetes.io/projected/e80c5fbc-edf6-42d6-8d61-715fd9d4491c-kube-api-access-55x58\") pod \"whisker-5fc66f597b-jxx8s\" (UID: \"e80c5fbc-edf6-42d6-8d61-715fd9d4491c\") " pod="calico-system/whisker-5fc66f597b-jxx8s" Aug 12 23:42:46.496448 containerd[2013]: time="2025-08-12T23:42:46.495145600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b\" id:\"f4b3503e5c7ff31aefd914ee708fc57962a51bb50020118f215b977a71e4783c\" pid:4534 exit_status:1 exited_at:{seconds:1755042166 nanos:494670340}" Aug 12 23:42:46.602115 containerd[2013]: time="2025-08-12T23:42:46.601631009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fc66f597b-jxx8s,Uid:e80c5fbc-edf6-42d6-8d61-715fd9d4491c,Namespace:calico-system,Attempt:0,}" Aug 12 23:42:46.996790 (udev-worker)[4505]: Network interface NamePolicy= disabled on kernel command line. 
Aug 12 23:42:47.000333 systemd-networkd[1854]: cali9ac78e30238: Link UP Aug 12 23:42:47.001785 systemd-networkd[1854]: cali9ac78e30238: Gained carrier Aug 12 23:42:47.035620 containerd[2013]: 2025-08-12 23:42:46.673 [INFO][4548] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:42:47.035620 containerd[2013]: 2025-08-12 23:42:46.781 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0 whisker-5fc66f597b- calico-system e80c5fbc-edf6-42d6-8d61-715fd9d4491c 894 0 2025-08-12 23:42:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5fc66f597b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-88 whisker-5fc66f597b-jxx8s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9ac78e30238 [] [] }} ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Namespace="calico-system" Pod="whisker-5fc66f597b-jxx8s" WorkloadEndpoint="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-" Aug 12 23:42:47.035620 containerd[2013]: 2025-08-12 23:42:46.781 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Namespace="calico-system" Pod="whisker-5fc66f597b-jxx8s" WorkloadEndpoint="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0" Aug 12 23:42:47.035620 containerd[2013]: 2025-08-12 23:42:46.908 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" HandleID="k8s-pod-network.42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Workload="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0" Aug 12 23:42:47.036009 containerd[2013]: 2025-08-12 23:42:46.908 
[INFO][4560] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" HandleID="k8s-pod-network.42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Workload="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000353c50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-88", "pod":"whisker-5fc66f597b-jxx8s", "timestamp":"2025-08-12 23:42:46.908176506 +0000 UTC"}, Hostname:"ip-172-31-28-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:42:47.036009 containerd[2013]: 2025-08-12 23:42:46.908 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:42:47.036009 containerd[2013]: 2025-08-12 23:42:46.908 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:42:47.036009 containerd[2013]: 2025-08-12 23:42:46.908 [INFO][4560] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-88' Aug 12 23:42:47.036009 containerd[2013]: 2025-08-12 23:42:46.926 [INFO][4560] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" host="ip-172-31-28-88" Aug 12 23:42:47.036009 containerd[2013]: 2025-08-12 23:42:46.937 [INFO][4560] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-88" Aug 12 23:42:47.036009 containerd[2013]: 2025-08-12 23:42:46.946 [INFO][4560] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:47.036009 containerd[2013]: 2025-08-12 23:42:46.951 [INFO][4560] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:47.036009 containerd[2013]: 2025-08-12 23:42:46.955 [INFO][4560] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:47.036009 containerd[2013]: 2025-08-12 23:42:46.955 [INFO][4560] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" host="ip-172-31-28-88" Aug 12 23:42:47.036771 containerd[2013]: 2025-08-12 23:42:46.958 [INFO][4560] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d Aug 12 23:42:47.036771 containerd[2013]: 2025-08-12 23:42:46.966 [INFO][4560] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" host="ip-172-31-28-88" Aug 12 23:42:47.036771 containerd[2013]: 2025-08-12 23:42:46.976 [INFO][4560] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.1/26] block=192.168.69.0/26 
handle="k8s-pod-network.42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" host="ip-172-31-28-88" Aug 12 23:42:47.036771 containerd[2013]: 2025-08-12 23:42:46.976 [INFO][4560] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.1/26] handle="k8s-pod-network.42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" host="ip-172-31-28-88" Aug 12 23:42:47.036771 containerd[2013]: 2025-08-12 23:42:46.976 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:42:47.036771 containerd[2013]: 2025-08-12 23:42:46.976 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.1/26] IPv6=[] ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" HandleID="k8s-pod-network.42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Workload="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0" Aug 12 23:42:47.037056 containerd[2013]: 2025-08-12 23:42:46.984 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Namespace="calico-system" Pod="whisker-5fc66f597b-jxx8s" WorkloadEndpoint="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0", GenerateName:"whisker-5fc66f597b-", Namespace:"calico-system", SelfLink:"", UID:"e80c5fbc-edf6-42d6-8d61-715fd9d4491c", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fc66f597b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"", Pod:"whisker-5fc66f597b-jxx8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9ac78e30238", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:47.037056 containerd[2013]: 2025-08-12 23:42:46.984 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.1/32] ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Namespace="calico-system" Pod="whisker-5fc66f597b-jxx8s" WorkloadEndpoint="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0" Aug 12 23:42:47.037294 containerd[2013]: 2025-08-12 23:42:46.984 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ac78e30238 ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Namespace="calico-system" Pod="whisker-5fc66f597b-jxx8s" WorkloadEndpoint="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0" Aug 12 23:42:47.037294 containerd[2013]: 2025-08-12 23:42:47.003 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Namespace="calico-system" Pod="whisker-5fc66f597b-jxx8s" WorkloadEndpoint="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0" Aug 12 23:42:47.037406 containerd[2013]: 2025-08-12 23:42:47.004 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Namespace="calico-system" 
Pod="whisker-5fc66f597b-jxx8s" WorkloadEndpoint="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0", GenerateName:"whisker-5fc66f597b-", Namespace:"calico-system", SelfLink:"", UID:"e80c5fbc-edf6-42d6-8d61-715fd9d4491c", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fc66f597b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d", Pod:"whisker-5fc66f597b-jxx8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9ac78e30238", MAC:"fe:cd:ea:bb:6d:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:47.037524 containerd[2013]: 2025-08-12 23:42:47.028 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" Namespace="calico-system" Pod="whisker-5fc66f597b-jxx8s" WorkloadEndpoint="ip--172--31--28--88-k8s-whisker--5fc66f597b--jxx8s-eth0" Aug 12 23:42:47.126665 containerd[2013]: 
time="2025-08-12T23:42:47.126468543Z" level=info msg="connecting to shim 42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d" address="unix:///run/containerd/s/10958d8feb24fa8b3080c353cf88e37d51c86677337c1a9d1a360a18a350d263" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:42:47.199806 systemd[1]: Started cri-containerd-42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d.scope - libcontainer container 42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d. Aug 12 23:42:47.261757 containerd[2013]: time="2025-08-12T23:42:47.260501164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b\" id:\"39b1103dea717ab1196f50d58f1ae99979001f8ebe1d6bc71ba7359abad9dc14\" pid:4598 exit_status:1 exited_at:{seconds:1755042167 nanos:259948036}" Aug 12 23:42:47.301776 containerd[2013]: time="2025-08-12T23:42:47.301711240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fc66f597b-jxx8s,Uid:e80c5fbc-edf6-42d6-8d61-715fd9d4491c,Namespace:calico-system,Attempt:0,} returns sandbox id \"42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d\"" Aug 12 23:42:47.306130 containerd[2013]: time="2025-08-12T23:42:47.306067504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 12 23:42:47.702125 kubelet[3515]: I0812 23:42:47.701961 3515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9efe89-2ccd-45b4-85b7-881c9027351c" path="/var/lib/kubelet/pods/fc9efe89-2ccd-45b4-85b7-881c9027351c/volumes" Aug 12 23:42:48.230507 systemd-networkd[1854]: cali9ac78e30238: Gained IPv6LL Aug 12 23:42:49.422421 containerd[2013]: time="2025-08-12T23:42:49.421078363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:49.423329 containerd[2013]: time="2025-08-12T23:42:49.423214831Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 12 23:42:49.426976 containerd[2013]: time="2025-08-12T23:42:49.426912115Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:49.439146 systemd-networkd[1854]: vxlan.calico: Link UP Aug 12 23:42:49.440155 systemd-networkd[1854]: vxlan.calico: Gained carrier Aug 12 23:42:49.441154 containerd[2013]: time="2025-08-12T23:42:49.438921967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:49.450303 containerd[2013]: time="2025-08-12T23:42:49.449352199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 2.143212923s" Aug 12 23:42:49.450303 containerd[2013]: time="2025-08-12T23:42:49.449643715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 12 23:42:49.470901 containerd[2013]: time="2025-08-12T23:42:49.470679295Z" level=info msg="CreateContainer within sandbox \"42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 12 23:42:49.496278 containerd[2013]: time="2025-08-12T23:42:49.495980011Z" level=info msg="Container 37f599f3883e0fc75785d4cf52b9fd6be95a01d01c0835fa92cc9a74bcbd63c8: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:42:49.526627 (udev-worker)[4508]: Network interface 
NamePolicy= disabled on kernel command line. Aug 12 23:42:49.535821 containerd[2013]: time="2025-08-12T23:42:49.535574371Z" level=info msg="CreateContainer within sandbox \"42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"37f599f3883e0fc75785d4cf52b9fd6be95a01d01c0835fa92cc9a74bcbd63c8\"" Aug 12 23:42:49.537924 containerd[2013]: time="2025-08-12T23:42:49.536771851Z" level=info msg="StartContainer for \"37f599f3883e0fc75785d4cf52b9fd6be95a01d01c0835fa92cc9a74bcbd63c8\"" Aug 12 23:42:49.548573 containerd[2013]: time="2025-08-12T23:42:49.548418415Z" level=info msg="connecting to shim 37f599f3883e0fc75785d4cf52b9fd6be95a01d01c0835fa92cc9a74bcbd63c8" address="unix:///run/containerd/s/10958d8feb24fa8b3080c353cf88e37d51c86677337c1a9d1a360a18a350d263" protocol=ttrpc version=3 Aug 12 23:42:49.614857 systemd[1]: Started cri-containerd-37f599f3883e0fc75785d4cf52b9fd6be95a01d01c0835fa92cc9a74bcbd63c8.scope - libcontainer container 37f599f3883e0fc75785d4cf52b9fd6be95a01d01c0835fa92cc9a74bcbd63c8. 
Aug 12 23:42:49.699465 containerd[2013]: time="2025-08-12T23:42:49.699082784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pmv6t,Uid:6486c795-f806-48bb-bfbf-6140cfb4bdcd,Namespace:kube-system,Attempt:0,}" Aug 12 23:42:49.760857 containerd[2013]: time="2025-08-12T23:42:49.760722884Z" level=info msg="StartContainer for \"37f599f3883e0fc75785d4cf52b9fd6be95a01d01c0835fa92cc9a74bcbd63c8\" returns successfully" Aug 12 23:42:49.764600 containerd[2013]: time="2025-08-12T23:42:49.764324072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 12 23:42:50.001637 systemd-networkd[1854]: calif45aa81c233: Link UP Aug 12 23:42:50.006351 systemd-networkd[1854]: calif45aa81c233: Gained carrier Aug 12 23:42:50.045581 containerd[2013]: 2025-08-12 23:42:49.847 [INFO][4845] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0 coredns-668d6bf9bc- kube-system 6486c795-f806-48bb-bfbf-6140cfb4bdcd 832 0 2025-08-12 23:42:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-88 coredns-668d6bf9bc-pmv6t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif45aa81c233 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pmv6t" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-" Aug 12 23:42:50.045581 containerd[2013]: 2025-08-12 23:42:49.848 [INFO][4845] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pmv6t" 
WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0" Aug 12 23:42:50.045581 containerd[2013]: 2025-08-12 23:42:49.903 [INFO][4865] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" HandleID="k8s-pod-network.34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Workload="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0" Aug 12 23:42:50.046465 containerd[2013]: 2025-08-12 23:42:49.904 [INFO][4865] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" HandleID="k8s-pod-network.34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Workload="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b6d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-88", "pod":"coredns-668d6bf9bc-pmv6t", "timestamp":"2025-08-12 23:42:49.903685989 +0000 UTC"}, Hostname:"ip-172-31-28-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:42:50.046465 containerd[2013]: 2025-08-12 23:42:49.904 [INFO][4865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:42:50.046465 containerd[2013]: 2025-08-12 23:42:49.904 [INFO][4865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:42:50.046465 containerd[2013]: 2025-08-12 23:42:49.904 [INFO][4865] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-88' Aug 12 23:42:50.046465 containerd[2013]: 2025-08-12 23:42:49.923 [INFO][4865] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" host="ip-172-31-28-88" Aug 12 23:42:50.046465 containerd[2013]: 2025-08-12 23:42:49.932 [INFO][4865] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-88" Aug 12 23:42:50.046465 containerd[2013]: 2025-08-12 23:42:49.942 [INFO][4865] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:50.046465 containerd[2013]: 2025-08-12 23:42:49.945 [INFO][4865] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:50.046465 containerd[2013]: 2025-08-12 23:42:49.954 [INFO][4865] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:50.046465 containerd[2013]: 2025-08-12 23:42:49.955 [INFO][4865] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" host="ip-172-31-28-88" Aug 12 23:42:50.046944 containerd[2013]: 2025-08-12 23:42:49.958 [INFO][4865] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b Aug 12 23:42:50.046944 containerd[2013]: 2025-08-12 23:42:49.966 [INFO][4865] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" host="ip-172-31-28-88" Aug 12 23:42:50.046944 containerd[2013]: 2025-08-12 23:42:49.987 [INFO][4865] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.2/26] block=192.168.69.0/26 
handle="k8s-pod-network.34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" host="ip-172-31-28-88" Aug 12 23:42:50.046944 containerd[2013]: 2025-08-12 23:42:49.987 [INFO][4865] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.2/26] handle="k8s-pod-network.34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" host="ip-172-31-28-88" Aug 12 23:42:50.046944 containerd[2013]: 2025-08-12 23:42:49.987 [INFO][4865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:42:50.046944 containerd[2013]: 2025-08-12 23:42:49.987 [INFO][4865] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.2/26] IPv6=[] ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" HandleID="k8s-pod-network.34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Workload="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0" Aug 12 23:42:50.048045 containerd[2013]: 2025-08-12 23:42:49.991 [INFO][4845] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pmv6t" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6486c795-f806-48bb-bfbf-6140cfb4bdcd", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"", Pod:"coredns-668d6bf9bc-pmv6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif45aa81c233", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:50.048045 containerd[2013]: 2025-08-12 23:42:49.991 [INFO][4845] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.2/32] ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pmv6t" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0" Aug 12 23:42:50.048045 containerd[2013]: 2025-08-12 23:42:49.992 [INFO][4845] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif45aa81c233 ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pmv6t" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0" Aug 12 23:42:50.048045 containerd[2013]: 2025-08-12 23:42:50.008 [INFO][4845] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-pmv6t" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0" Aug 12 23:42:50.048045 containerd[2013]: 2025-08-12 23:42:50.010 [INFO][4845] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pmv6t" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6486c795-f806-48bb-bfbf-6140cfb4bdcd", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b", Pod:"coredns-668d6bf9bc-pmv6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif45aa81c233", MAC:"aa:7d:7d:09:ce:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:50.048045 containerd[2013]: 2025-08-12 23:42:50.035 [INFO][4845] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" Namespace="kube-system" Pod="coredns-668d6bf9bc-pmv6t" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--pmv6t-eth0" Aug 12 23:42:50.128803 containerd[2013]: time="2025-08-12T23:42:50.128720562Z" level=info msg="connecting to shim 34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b" address="unix:///run/containerd/s/eddb430fe7b96e149a5c18137ee702fbfb9e9fc4193658a73a9597ec8eb2496b" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:42:50.200554 systemd[1]: Started cri-containerd-34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b.scope - libcontainer container 34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b. 
Aug 12 23:42:50.341064 containerd[2013]: time="2025-08-12T23:42:50.340282927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pmv6t,Uid:6486c795-f806-48bb-bfbf-6140cfb4bdcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b\"" Aug 12 23:42:50.349627 containerd[2013]: time="2025-08-12T23:42:50.349519543Z" level=info msg="CreateContainer within sandbox \"34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:42:50.377528 containerd[2013]: time="2025-08-12T23:42:50.377209579Z" level=info msg="Container e57b03134c134aeed9cc26bd604040f85032062514013d8fa46c30bcd51167c4: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:42:50.393097 containerd[2013]: time="2025-08-12T23:42:50.392915719Z" level=info msg="CreateContainer within sandbox \"34f14700b37db9c0ce5566fd910e6d82cb37ae9c914e7bab9638e213ce5ff72b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e57b03134c134aeed9cc26bd604040f85032062514013d8fa46c30bcd51167c4\"" Aug 12 23:42:50.396496 containerd[2013]: time="2025-08-12T23:42:50.394522207Z" level=info msg="StartContainer for \"e57b03134c134aeed9cc26bd604040f85032062514013d8fa46c30bcd51167c4\"" Aug 12 23:42:50.398893 containerd[2013]: time="2025-08-12T23:42:50.398625715Z" level=info msg="connecting to shim e57b03134c134aeed9cc26bd604040f85032062514013d8fa46c30bcd51167c4" address="unix:///run/containerd/s/eddb430fe7b96e149a5c18137ee702fbfb9e9fc4193658a73a9597ec8eb2496b" protocol=ttrpc version=3 Aug 12 23:42:50.451947 systemd[1]: Started cri-containerd-e57b03134c134aeed9cc26bd604040f85032062514013d8fa46c30bcd51167c4.scope - libcontainer container e57b03134c134aeed9cc26bd604040f85032062514013d8fa46c30bcd51167c4. 
Aug 12 23:42:50.557433 containerd[2013]: time="2025-08-12T23:42:50.557288432Z" level=info msg="StartContainer for \"e57b03134c134aeed9cc26bd604040f85032062514013d8fa46c30bcd51167c4\" returns successfully" Aug 12 23:42:50.698689 containerd[2013]: time="2025-08-12T23:42:50.698126673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sh6l,Uid:f465d6b2-8aef-4866-ba3f-bfdd97688b16,Namespace:calico-system,Attempt:0,}" Aug 12 23:42:50.934710 systemd-networkd[1854]: calia96a8db9348: Link UP Aug 12 23:42:50.936823 systemd-networkd[1854]: calia96a8db9348: Gained carrier Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.797 [INFO][4995] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0 csi-node-driver- calico-system f465d6b2-8aef-4866-ba3f-bfdd97688b16 727 0 2025-08-12 23:42:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-88 csi-node-driver-8sh6l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia96a8db9348 [] [] }} ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" Namespace="calico-system" Pod="csi-node-driver-8sh6l" WorkloadEndpoint="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.797 [INFO][4995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" Namespace="calico-system" Pod="csi-node-driver-8sh6l" WorkloadEndpoint="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.845 [INFO][5006] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" HandleID="k8s-pod-network.41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" Workload="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.847 [INFO][5006] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" HandleID="k8s-pod-network.41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" Workload="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b640), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-88", "pod":"csi-node-driver-8sh6l", "timestamp":"2025-08-12 23:42:50.845619418 +0000 UTC"}, Hostname:"ip-172-31-28-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.847 [INFO][5006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.847 [INFO][5006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.847 [INFO][5006] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-88' Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.864 [INFO][5006] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" host="ip-172-31-28-88" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.875 [INFO][5006] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-88" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.885 [INFO][5006] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.889 [INFO][5006] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.895 [INFO][5006] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.895 [INFO][5006] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" host="ip-172-31-28-88" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.898 [INFO][5006] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.908 [INFO][5006] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" host="ip-172-31-28-88" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.919 [INFO][5006] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.3/26] block=192.168.69.0/26 
handle="k8s-pod-network.41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" host="ip-172-31-28-88" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.919 [INFO][5006] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.3/26] handle="k8s-pod-network.41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" host="ip-172-31-28-88" Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.919 [INFO][5006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:42:50.968037 containerd[2013]: 2025-08-12 23:42:50.920 [INFO][5006] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.3/26] IPv6=[] ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" HandleID="k8s-pod-network.41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" Workload="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0" Aug 12 23:42:50.971645 containerd[2013]: 2025-08-12 23:42:50.924 [INFO][4995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" Namespace="calico-system" Pod="csi-node-driver-8sh6l" WorkloadEndpoint="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f465d6b2-8aef-4866-ba3f-bfdd97688b16", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"", Pod:"csi-node-driver-8sh6l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia96a8db9348", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:50.971645 containerd[2013]: 2025-08-12 23:42:50.924 [INFO][4995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.3/32] ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" Namespace="calico-system" Pod="csi-node-driver-8sh6l" WorkloadEndpoint="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0" Aug 12 23:42:50.971645 containerd[2013]: 2025-08-12 23:42:50.924 [INFO][4995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia96a8db9348 ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" Namespace="calico-system" Pod="csi-node-driver-8sh6l" WorkloadEndpoint="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0" Aug 12 23:42:50.971645 containerd[2013]: 2025-08-12 23:42:50.939 [INFO][4995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" Namespace="calico-system" Pod="csi-node-driver-8sh6l" WorkloadEndpoint="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0" Aug 12 23:42:50.971645 containerd[2013]: 2025-08-12 23:42:50.940 [INFO][4995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" Namespace="calico-system" Pod="csi-node-driver-8sh6l" WorkloadEndpoint="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f465d6b2-8aef-4866-ba3f-bfdd97688b16", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa", Pod:"csi-node-driver-8sh6l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia96a8db9348", MAC:"0e:ad:e2:33:8b:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:50.971645 containerd[2013]: 2025-08-12 23:42:50.957 [INFO][4995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" 
Namespace="calico-system" Pod="csi-node-driver-8sh6l" WorkloadEndpoint="ip--172--31--28--88-k8s-csi--node--driver--8sh6l-eth0" Aug 12 23:42:50.982492 systemd-networkd[1854]: vxlan.calico: Gained IPv6LL Aug 12 23:42:51.025206 containerd[2013]: time="2025-08-12T23:42:51.025074102Z" level=info msg="connecting to shim 41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa" address="unix:///run/containerd/s/3c4772cfa64935568437f082ba44063be69ab722aae37ad6496c2daf2543bded" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:42:51.083993 systemd[1]: Started cri-containerd-41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa.scope - libcontainer container 41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa. Aug 12 23:42:51.126426 kubelet[3515]: I0812 23:42:51.124958 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pmv6t" podStartSLOduration=49.124935103 podStartE2EDuration="49.124935103s" podCreationTimestamp="2025-08-12 23:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:42:51.122944507 +0000 UTC m=+55.691123174" watchObservedRunningTime="2025-08-12 23:42:51.124935103 +0000 UTC m=+55.693113770" Aug 12 23:42:51.276566 containerd[2013]: time="2025-08-12T23:42:51.275475992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sh6l,Uid:f465d6b2-8aef-4866-ba3f-bfdd97688b16,Namespace:calico-system,Attempt:0,} returns sandbox id \"41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa\"" Aug 12 23:42:51.302546 systemd-networkd[1854]: calif45aa81c233: Gained IPv6LL Aug 12 23:42:52.071409 systemd-networkd[1854]: calia96a8db9348: Gained IPv6LL Aug 12 23:42:52.584777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017152774.mount: Deactivated successfully. 
Aug 12 23:42:52.611987 containerd[2013]: time="2025-08-12T23:42:52.611927446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:52.613306 containerd[2013]: time="2025-08-12T23:42:52.612852298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 12 23:42:52.614628 containerd[2013]: time="2025-08-12T23:42:52.614525086Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:52.618485 containerd[2013]: time="2025-08-12T23:42:52.618403378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:52.620217 containerd[2013]: time="2025-08-12T23:42:52.619745182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.85518459s" Aug 12 23:42:52.620217 containerd[2013]: time="2025-08-12T23:42:52.619801438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 12 23:42:52.622824 containerd[2013]: time="2025-08-12T23:42:52.622745458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 12 23:42:52.627162 containerd[2013]: time="2025-08-12T23:42:52.627115246Z" level=info msg="CreateContainer within sandbox 
\"42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 12 23:42:52.645069 containerd[2013]: time="2025-08-12T23:42:52.643285091Z" level=info msg="Container 7fd538e76993ee452241b8147f94b11753d169a70bf4a624fc09912d0eabbd8d: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:42:52.656458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825831325.mount: Deactivated successfully. Aug 12 23:42:52.668164 containerd[2013]: time="2025-08-12T23:42:52.667857227Z" level=info msg="CreateContainer within sandbox \"42b5ad20792f84e43fc38d2bfc7a484f8f741928dfbfc1fdca3f6af09e64595d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7fd538e76993ee452241b8147f94b11753d169a70bf4a624fc09912d0eabbd8d\"" Aug 12 23:42:52.669590 containerd[2013]: time="2025-08-12T23:42:52.669470303Z" level=info msg="StartContainer for \"7fd538e76993ee452241b8147f94b11753d169a70bf4a624fc09912d0eabbd8d\"" Aug 12 23:42:52.674487 containerd[2013]: time="2025-08-12T23:42:52.674432459Z" level=info msg="connecting to shim 7fd538e76993ee452241b8147f94b11753d169a70bf4a624fc09912d0eabbd8d" address="unix:///run/containerd/s/10958d8feb24fa8b3080c353cf88e37d51c86677337c1a9d1a360a18a350d263" protocol=ttrpc version=3 Aug 12 23:42:52.698679 containerd[2013]: time="2025-08-12T23:42:52.698189363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b585bd6-k8x9p,Uid:bd6c9aed-3c52-4cf3-b20a-0312b540cfff,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:42:52.732620 systemd[1]: Started cri-containerd-7fd538e76993ee452241b8147f94b11753d169a70bf4a624fc09912d0eabbd8d.scope - libcontainer container 7fd538e76993ee452241b8147f94b11753d169a70bf4a624fc09912d0eabbd8d. 
Aug 12 23:42:52.875981 containerd[2013]: time="2025-08-12T23:42:52.875709972Z" level=info msg="StartContainer for \"7fd538e76993ee452241b8147f94b11753d169a70bf4a624fc09912d0eabbd8d\" returns successfully" Aug 12 23:42:53.009179 systemd-networkd[1854]: calif7072cf5631: Link UP Aug 12 23:42:53.011876 systemd-networkd[1854]: calif7072cf5631: Gained carrier Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.813 [INFO][5094] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0 calico-apiserver-865b585bd6- calico-apiserver bd6c9aed-3c52-4cf3-b20a-0312b540cfff 834 0 2025-08-12 23:42:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:865b585bd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-88 calico-apiserver-865b585bd6-k8x9p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7072cf5631 [] [] }} ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-k8x9p" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.813 [INFO][5094] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-k8x9p" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.907 [INFO][5114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" 
HandleID="k8s-pod-network.ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Workload="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.908 [INFO][5114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" HandleID="k8s-pod-network.ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Workload="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-88", "pod":"calico-apiserver-865b585bd6-k8x9p", "timestamp":"2025-08-12 23:42:52.907592232 +0000 UTC"}, Hostname:"ip-172-31-28-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.908 [INFO][5114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.908 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.908 [INFO][5114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-88' Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.927 [INFO][5114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" host="ip-172-31-28-88" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.939 [INFO][5114] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-88" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.949 [INFO][5114] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.954 [INFO][5114] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.961 [INFO][5114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.961 [INFO][5114] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" host="ip-172-31-28-88" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.964 [INFO][5114] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.974 [INFO][5114] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" host="ip-172-31-28-88" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.984 [INFO][5114] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.4/26] block=192.168.69.0/26 
handle="k8s-pod-network.ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" host="ip-172-31-28-88" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.985 [INFO][5114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.4/26] handle="k8s-pod-network.ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" host="ip-172-31-28-88" Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.985 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:42:53.057663 containerd[2013]: 2025-08-12 23:42:52.985 [INFO][5114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.4/26] IPv6=[] ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" HandleID="k8s-pod-network.ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Workload="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0" Aug 12 23:42:53.059843 containerd[2013]: 2025-08-12 23:42:52.991 [INFO][5094] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-k8x9p" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0", GenerateName:"calico-apiserver-865b585bd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd6c9aed-3c52-4cf3-b20a-0312b540cfff", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865b585bd6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"", Pod:"calico-apiserver-865b585bd6-k8x9p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7072cf5631", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:53.059843 containerd[2013]: 2025-08-12 23:42:52.992 [INFO][5094] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.4/32] ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-k8x9p" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0" Aug 12 23:42:53.059843 containerd[2013]: 2025-08-12 23:42:52.992 [INFO][5094] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7072cf5631 ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-k8x9p" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0" Aug 12 23:42:53.059843 containerd[2013]: 2025-08-12 23:42:53.013 [INFO][5094] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-k8x9p" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0" Aug 12 23:42:53.059843 containerd[2013]: 2025-08-12 23:42:53.015 
[INFO][5094] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-k8x9p" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0", GenerateName:"calico-apiserver-865b585bd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd6c9aed-3c52-4cf3-b20a-0312b540cfff", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865b585bd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf", Pod:"calico-apiserver-865b585bd6-k8x9p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7072cf5631", MAC:"b2:d9:0e:2f:90:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:53.059843 containerd[2013]: 2025-08-12 23:42:53.048 [INFO][5094] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-k8x9p" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--k8x9p-eth0" Aug 12 23:42:53.153467 containerd[2013]: time="2025-08-12T23:42:53.152744961Z" level=info msg="connecting to shim ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf" address="unix:///run/containerd/s/b7ecb3aa6c6bd41990b40e7803c3ec35e3327462c53a3fd1a18d0af6e39b57b4" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:42:53.170504 kubelet[3515]: I0812 23:42:53.170426 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5fc66f597b-jxx8s" podStartSLOduration=1.853007203 podStartE2EDuration="7.170380953s" podCreationTimestamp="2025-08-12 23:42:46 +0000 UTC" firstStartedPulling="2025-08-12 23:42:47.30428878 +0000 UTC m=+51.872467447" lastFinishedPulling="2025-08-12 23:42:52.62166253 +0000 UTC m=+57.189841197" observedRunningTime="2025-08-12 23:42:53.170322285 +0000 UTC m=+57.738500964" watchObservedRunningTime="2025-08-12 23:42:53.170380953 +0000 UTC m=+57.738559620" Aug 12 23:42:53.228610 systemd[1]: Started cri-containerd-ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf.scope - libcontainer container ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf. 
Aug 12 23:42:53.325713 containerd[2013]: time="2025-08-12T23:42:53.325649602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b585bd6-k8x9p,Uid:bd6c9aed-3c52-4cf3-b20a-0312b540cfff,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf\"" Aug 12 23:42:53.697608 containerd[2013]: time="2025-08-12T23:42:53.697527024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df96dbd7c-shcc7,Uid:d908bbbe-0216-4d9e-bc2f-95bf8249336c,Namespace:calico-system,Attempt:0,}" Aug 12 23:42:53.697608 containerd[2013]: time="2025-08-12T23:42:53.697528224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b585bd6-h7tzr,Uid:31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:42:53.703040 containerd[2013]: time="2025-08-12T23:42:53.702747744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mmcwn,Uid:a8b776ad-ce4c-49ef-85b7-5aacc790ac0b,Namespace:kube-system,Attempt:0,}" Aug 12 23:42:53.703040 containerd[2013]: time="2025-08-12T23:42:53.702887688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dbgt6,Uid:09ff9862-d31d-47c3-9c5e-b1b67b525562,Namespace:calico-system,Attempt:0,}" Aug 12 23:42:54.259164 systemd-networkd[1854]: calicaf35e81278: Link UP Aug 12 23:42:54.263704 systemd-networkd[1854]: calicaf35e81278: Gained carrier Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:53.945 [INFO][5201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0 calico-kube-controllers-7df96dbd7c- calico-system d908bbbe-0216-4d9e-bc2f-95bf8249336c 833 0 2025-08-12 23:42:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7df96dbd7c 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-88 calico-kube-controllers-7df96dbd7c-shcc7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicaf35e81278 [] [] }} ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Namespace="calico-system" Pod="calico-kube-controllers-7df96dbd7c-shcc7" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:53.951 [INFO][5201] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Namespace="calico-system" Pod="calico-kube-controllers-7df96dbd7c-shcc7" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.099 [INFO][5245] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" HandleID="k8s-pod-network.940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Workload="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.100 [INFO][5245] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" HandleID="k8s-pod-network.940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Workload="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000334050), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-88", "pod":"calico-kube-controllers-7df96dbd7c-shcc7", "timestamp":"2025-08-12 23:42:54.098199478 +0000 
UTC"}, Hostname:"ip-172-31-28-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.100 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.101 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.101 [INFO][5245] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-88' Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.127 [INFO][5245] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" host="ip-172-31-28-88" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.141 [INFO][5245] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-88" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.159 [INFO][5245] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.168 [INFO][5245] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.177 [INFO][5245] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.177 [INFO][5245] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" host="ip-172-31-28-88" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.184 [INFO][5245] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779 Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.206 [INFO][5245] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" host="ip-172-31-28-88" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.226 [INFO][5245] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.5/26] block=192.168.69.0/26 handle="k8s-pod-network.940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" host="ip-172-31-28-88" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.227 [INFO][5245] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.5/26] handle="k8s-pod-network.940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" host="ip-172-31-28-88" Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.227 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:42:54.319200 containerd[2013]: 2025-08-12 23:42:54.227 [INFO][5245] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.5/26] IPv6=[] ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" HandleID="k8s-pod-network.940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Workload="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0" Aug 12 23:42:54.321481 containerd[2013]: 2025-08-12 23:42:54.243 [INFO][5201] cni-plugin/k8s.go 418: Populated endpoint ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Namespace="calico-system" Pod="calico-kube-controllers-7df96dbd7c-shcc7" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0", GenerateName:"calico-kube-controllers-7df96dbd7c-", Namespace:"calico-system", SelfLink:"", UID:"d908bbbe-0216-4d9e-bc2f-95bf8249336c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7df96dbd7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"", Pod:"calico-kube-controllers-7df96dbd7c-shcc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicaf35e81278", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:54.321481 containerd[2013]: 2025-08-12 23:42:54.246 [INFO][5201] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.5/32] ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Namespace="calico-system" Pod="calico-kube-controllers-7df96dbd7c-shcc7" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0" Aug 12 23:42:54.321481 containerd[2013]: 2025-08-12 23:42:54.246 [INFO][5201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicaf35e81278 ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Namespace="calico-system" Pod="calico-kube-controllers-7df96dbd7c-shcc7" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0" Aug 12 23:42:54.321481 containerd[2013]: 2025-08-12 23:42:54.267 [INFO][5201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Namespace="calico-system" Pod="calico-kube-controllers-7df96dbd7c-shcc7" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0" Aug 12 23:42:54.321481 containerd[2013]: 2025-08-12 23:42:54.273 [INFO][5201] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Namespace="calico-system" Pod="calico-kube-controllers-7df96dbd7c-shcc7" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0", GenerateName:"calico-kube-controllers-7df96dbd7c-", Namespace:"calico-system", SelfLink:"", UID:"d908bbbe-0216-4d9e-bc2f-95bf8249336c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7df96dbd7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779", Pod:"calico-kube-controllers-7df96dbd7c-shcc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicaf35e81278", MAC:"ca:d7:5b:62:df:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:54.321481 containerd[2013]: 2025-08-12 23:42:54.311 [INFO][5201] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" Namespace="calico-system" Pod="calico-kube-controllers-7df96dbd7c-shcc7" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--kube--controllers--7df96dbd7c--shcc7-eth0" Aug 
12 23:42:54.395847 containerd[2013]: time="2025-08-12T23:42:54.395697983Z" level=info msg="connecting to shim 940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779" address="unix:///run/containerd/s/89c432044a8925f109d7f11e5415180b6da7cc8c7f369d839829eddaed5f09f1" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:42:54.437751 systemd-networkd[1854]: cali23ec5d5b849: Link UP Aug 12 23:42:54.441186 systemd-networkd[1854]: cali23ec5d5b849: Gained carrier Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:53.916 [INFO][5220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0 coredns-668d6bf9bc- kube-system a8b776ad-ce4c-49ef-85b7-5aacc790ac0b 831 0 2025-08-12 23:42:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-88 coredns-668d6bf9bc-mmcwn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali23ec5d5b849 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmcwn" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:53.916 [INFO][5220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmcwn" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.129 [INFO][5240] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" 
HandleID="k8s-pod-network.1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Workload="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.135 [INFO][5240] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" HandleID="k8s-pod-network.1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Workload="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000341eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-88", "pod":"coredns-668d6bf9bc-mmcwn", "timestamp":"2025-08-12 23:42:54.129547942 +0000 UTC"}, Hostname:"ip-172-31-28-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.135 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.230 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.230 [INFO][5240] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-88' Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.257 [INFO][5240] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" host="ip-172-31-28-88" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.294 [INFO][5240] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-88" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.323 [INFO][5240] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.331 [INFO][5240] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.346 [INFO][5240] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.346 [INFO][5240] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" host="ip-172-31-28-88" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.361 [INFO][5240] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.376 [INFO][5240] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" host="ip-172-31-28-88" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.398 [INFO][5240] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.6/26] block=192.168.69.0/26 
handle="k8s-pod-network.1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" host="ip-172-31-28-88" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.399 [INFO][5240] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.6/26] handle="k8s-pod-network.1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" host="ip-172-31-28-88" Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.400 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:42:54.500587 containerd[2013]: 2025-08-12 23:42:54.400 [INFO][5240] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.6/26] IPv6=[] ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" HandleID="k8s-pod-network.1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Workload="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0" Aug 12 23:42:54.502838 containerd[2013]: 2025-08-12 23:42:54.420 [INFO][5220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmcwn" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a8b776ad-ce4c-49ef-85b7-5aacc790ac0b", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"", Pod:"coredns-668d6bf9bc-mmcwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23ec5d5b849", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:54.502838 containerd[2013]: 2025-08-12 23:42:54.422 [INFO][5220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.6/32] ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmcwn" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0" Aug 12 23:42:54.502838 containerd[2013]: 2025-08-12 23:42:54.423 [INFO][5220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23ec5d5b849 ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmcwn" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0" Aug 12 23:42:54.502838 containerd[2013]: 2025-08-12 23:42:54.444 [INFO][5220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-mmcwn" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0" Aug 12 23:42:54.502838 containerd[2013]: 2025-08-12 23:42:54.450 [INFO][5220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmcwn" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a8b776ad-ce4c-49ef-85b7-5aacc790ac0b", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b", Pod:"coredns-668d6bf9bc-mmcwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23ec5d5b849", MAC:"6a:fb:13:1e:f3:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:54.502838 containerd[2013]: 2025-08-12 23:42:54.488 [INFO][5220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmcwn" WorkloadEndpoint="ip--172--31--28--88-k8s-coredns--668d6bf9bc--mmcwn-eth0" Aug 12 23:42:54.554670 systemd[1]: Started cri-containerd-940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779.scope - libcontainer container 940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779. Aug 12 23:42:54.566419 systemd-networkd[1854]: calif7072cf5631: Gained IPv6LL Aug 12 23:42:54.619273 containerd[2013]: time="2025-08-12T23:42:54.617611152Z" level=info msg="connecting to shim 1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b" address="unix:///run/containerd/s/92ee751424d297f6ecdbaaaee3dd6538a91d7376b87bda67c015af93d4d88852" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:42:54.719054 systemd-networkd[1854]: calicf0d6093ae4: Link UP Aug 12 23:42:54.724490 systemd-networkd[1854]: calicf0d6093ae4: Gained carrier Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.011 [INFO][5188] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0 calico-apiserver-865b585bd6- calico-apiserver 31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b 836 0 2025-08-12 23:42:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:865b585bd6 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-88 calico-apiserver-865b585bd6-h7tzr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicf0d6093ae4 [] [] }} ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-h7tzr" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.011 [INFO][5188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-h7tzr" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.203 [INFO][5253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" HandleID="k8s-pod-network.58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Workload="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.204 [INFO][5253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" HandleID="k8s-pod-network.58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Workload="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005e00d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-88", "pod":"calico-apiserver-865b585bd6-h7tzr", "timestamp":"2025-08-12 23:42:54.203802238 +0000 UTC"}, Hostname:"ip-172-31-28-88", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.204 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.400 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.402 [INFO][5253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-88' Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.460 [INFO][5253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" host="ip-172-31-28-88" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.504 [INFO][5253] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-88" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.534 [INFO][5253] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.547 [INFO][5253] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.570 [INFO][5253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.571 [INFO][5253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" host="ip-172-31-28-88" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.578 [INFO][5253] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9 Aug 
12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.596 [INFO][5253] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.0/26 handle="k8s-pod-network.58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" host="ip-172-31-28-88" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.623 [INFO][5253] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.7/26] block=192.168.69.0/26 handle="k8s-pod-network.58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" host="ip-172-31-28-88" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.624 [INFO][5253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.7/26] handle="k8s-pod-network.58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" host="ip-172-31-28-88" Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.625 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:42:54.810676 containerd[2013]: 2025-08-12 23:42:54.626 [INFO][5253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.7/26] IPv6=[] ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" HandleID="k8s-pod-network.58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Workload="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0" Aug 12 23:42:54.816608 containerd[2013]: 2025-08-12 23:42:54.646 [INFO][5188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-h7tzr" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0", GenerateName:"calico-apiserver-865b585bd6-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865b585bd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"", Pod:"calico-apiserver-865b585bd6-h7tzr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf0d6093ae4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:54.816608 containerd[2013]: 2025-08-12 23:42:54.646 [INFO][5188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.7/32] ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-h7tzr" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0" Aug 12 23:42:54.816608 containerd[2013]: 2025-08-12 23:42:54.646 [INFO][5188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf0d6093ae4 ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-h7tzr" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0" Aug 12 
23:42:54.816608 containerd[2013]: 2025-08-12 23:42:54.728 [INFO][5188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-h7tzr" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0" Aug 12 23:42:54.816608 containerd[2013]: 2025-08-12 23:42:54.730 [INFO][5188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-h7tzr" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0", GenerateName:"calico-apiserver-865b585bd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865b585bd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9", Pod:"calico-apiserver-865b585bd6-h7tzr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.69.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf0d6093ae4", MAC:"2e:9b:dc:e0:11:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:54.816608 containerd[2013]: 2025-08-12 23:42:54.776 [INFO][5188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" Namespace="calico-apiserver" Pod="calico-apiserver-865b585bd6-h7tzr" WorkloadEndpoint="ip--172--31--28--88-k8s-calico--apiserver--865b585bd6--h7tzr-eth0" Aug 12 23:42:54.815688 systemd[1]: Started cri-containerd-1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b.scope - libcontainer container 1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b. Aug 12 23:42:54.887679 containerd[2013]: time="2025-08-12T23:42:54.887607002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:54.899165 containerd[2013]: time="2025-08-12T23:42:54.897632426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 12 23:42:54.900032 containerd[2013]: time="2025-08-12T23:42:54.899979038Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:54.931405 systemd-networkd[1854]: cali6c9876227c9: Link UP Aug 12 23:42:54.936220 systemd-networkd[1854]: cali6c9876227c9: Gained carrier Aug 12 23:42:54.936491 containerd[2013]: time="2025-08-12T23:42:54.934150622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:54.938034 containerd[2013]: time="2025-08-12T23:42:54.937269782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.31387798s" Aug 12 23:42:54.938318 containerd[2013]: time="2025-08-12T23:42:54.938100638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 12 23:42:54.945310 containerd[2013]: time="2025-08-12T23:42:54.944530562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:42:54.956706 containerd[2013]: time="2025-08-12T23:42:54.956563802Z" level=info msg="CreateContainer within sandbox \"41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 12 23:42:54.969111 containerd[2013]: time="2025-08-12T23:42:54.969034118Z" level=info msg="connecting to shim 58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9" address="unix:///run/containerd/s/7c54244cdf6a2dca3407501d54cf87b087eb5001d22653ded442d42be9a789b6" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.008 [INFO][5192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0 goldmane-768f4c5c69- calico-system 09ff9862-d31d-47c3-9c5e-b1b67b525562 835 0 2025-08-12 23:42:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-88 goldmane-768f4c5c69-dbgt6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6c9876227c9 [] [] }} ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Namespace="calico-system" Pod="goldmane-768f4c5c69-dbgt6" WorkloadEndpoint="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.009 [INFO][5192] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Namespace="calico-system" Pod="goldmane-768f4c5c69-dbgt6" WorkloadEndpoint="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.235 [INFO][5251] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" HandleID="k8s-pod-network.86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Workload="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.235 [INFO][5251] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" HandleID="k8s-pod-network.86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Workload="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002440b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-88", "pod":"goldmane-768f4c5c69-dbgt6", "timestamp":"2025-08-12 23:42:54.235498558 +0000 UTC"}, Hostname:"ip-172-31-28-88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 
23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.235 [INFO][5251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.626 [INFO][5251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.626 [INFO][5251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-88' Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.726 [INFO][5251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" host="ip-172-31-28-88" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.748 [INFO][5251] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-88" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.781 [INFO][5251] ipam/ipam.go 511: Trying affinity for 192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.793 [INFO][5251] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.808 [INFO][5251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.0/26 host="ip-172-31-28-88" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.808 [INFO][5251] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.69.0/26 handle="k8s-pod-network.86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" host="ip-172-31-28-88" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.821 [INFO][5251] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.840 [INFO][5251] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.69.0/26 
handle="k8s-pod-network.86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" host="ip-172-31-28-88" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.883 [INFO][5251] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.69.8/26] block=192.168.69.0/26 handle="k8s-pod-network.86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" host="ip-172-31-28-88" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.883 [INFO][5251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.8/26] handle="k8s-pod-network.86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" host="ip-172-31-28-88" Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.892 [INFO][5251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:42:55.017640 containerd[2013]: 2025-08-12 23:42:54.892 [INFO][5251] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.69.8/26] IPv6=[] ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" HandleID="k8s-pod-network.86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Workload="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0" Aug 12 23:42:55.026030 containerd[2013]: 2025-08-12 23:42:54.905 [INFO][5192] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Namespace="calico-system" Pod="goldmane-768f4c5c69-dbgt6" WorkloadEndpoint="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"09ff9862-d31d-47c3-9c5e-b1b67b525562", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 28, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"", Pod:"goldmane-768f4c5c69-dbgt6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6c9876227c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:55.026030 containerd[2013]: 2025-08-12 23:42:54.906 [INFO][5192] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.8/32] ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Namespace="calico-system" Pod="goldmane-768f4c5c69-dbgt6" WorkloadEndpoint="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0" Aug 12 23:42:55.026030 containerd[2013]: 2025-08-12 23:42:54.906 [INFO][5192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c9876227c9 ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Namespace="calico-system" Pod="goldmane-768f4c5c69-dbgt6" WorkloadEndpoint="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0" Aug 12 23:42:55.026030 containerd[2013]: 2025-08-12 23:42:54.934 [INFO][5192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Namespace="calico-system" Pod="goldmane-768f4c5c69-dbgt6" 
WorkloadEndpoint="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0" Aug 12 23:42:55.026030 containerd[2013]: 2025-08-12 23:42:54.938 [INFO][5192] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Namespace="calico-system" Pod="goldmane-768f4c5c69-dbgt6" WorkloadEndpoint="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"09ff9862-d31d-47c3-9c5e-b1b67b525562", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-88", ContainerID:"86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c", Pod:"goldmane-768f4c5c69-dbgt6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6c9876227c9", MAC:"fa:9b:9c:33:1c:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:42:55.026030 containerd[2013]: 2025-08-12 
23:42:54.992 [INFO][5192] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" Namespace="calico-system" Pod="goldmane-768f4c5c69-dbgt6" WorkloadEndpoint="ip--172--31--28--88-k8s-goldmane--768f4c5c69--dbgt6-eth0" Aug 12 23:42:55.063902 containerd[2013]: time="2025-08-12T23:42:55.063773327Z" level=info msg="Container 689a56cbbcb5bbe2b1b360b7727dd788eb3a84a1b51f3d9561907ce58c0be540: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:42:55.126681 systemd[1]: Started cri-containerd-58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9.scope - libcontainer container 58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9. Aug 12 23:42:55.129021 containerd[2013]: time="2025-08-12T23:42:55.128955743Z" level=info msg="CreateContainer within sandbox \"41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"689a56cbbcb5bbe2b1b360b7727dd788eb3a84a1b51f3d9561907ce58c0be540\"" Aug 12 23:42:55.138542 containerd[2013]: time="2025-08-12T23:42:55.134319971Z" level=info msg="StartContainer for \"689a56cbbcb5bbe2b1b360b7727dd788eb3a84a1b51f3d9561907ce58c0be540\"" Aug 12 23:42:55.157593 containerd[2013]: time="2025-08-12T23:42:55.157379819Z" level=info msg="connecting to shim 689a56cbbcb5bbe2b1b360b7727dd788eb3a84a1b51f3d9561907ce58c0be540" address="unix:///run/containerd/s/3c4772cfa64935568437f082ba44063be69ab722aae37ad6496c2daf2543bded" protocol=ttrpc version=3 Aug 12 23:42:55.171946 containerd[2013]: time="2025-08-12T23:42:55.171867551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mmcwn,Uid:a8b776ad-ce4c-49ef-85b7-5aacc790ac0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b\"" Aug 12 23:42:55.179252 containerd[2013]: time="2025-08-12T23:42:55.178819775Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7df96dbd7c-shcc7,Uid:d908bbbe-0216-4d9e-bc2f-95bf8249336c,Namespace:calico-system,Attempt:0,} returns sandbox id \"940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779\"" Aug 12 23:42:55.197908 containerd[2013]: time="2025-08-12T23:42:55.197464343Z" level=info msg="CreateContainer within sandbox \"1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:42:55.211752 containerd[2013]: time="2025-08-12T23:42:55.211677575Z" level=info msg="connecting to shim 86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c" address="unix:///run/containerd/s/e2e5996539b4dc45ee3f7120a24886aefafd91e675efaea54305aaad0e51340a" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:42:55.244178 containerd[2013]: time="2025-08-12T23:42:55.244098083Z" level=info msg="Container 3b61c12fe7404852f475c14699ef4ca6ee8bb42fdadcd5409948e27a4351196c: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:42:55.246623 systemd[1]: Started cri-containerd-689a56cbbcb5bbe2b1b360b7727dd788eb3a84a1b51f3d9561907ce58c0be540.scope - libcontainer container 689a56cbbcb5bbe2b1b360b7727dd788eb3a84a1b51f3d9561907ce58c0be540. Aug 12 23:42:55.297009 containerd[2013]: time="2025-08-12T23:42:55.296480988Z" level=info msg="CreateContainer within sandbox \"1f40d76c33282a13372a785827007a5a7146ea32f1882f2f934607f0c2eba78b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b61c12fe7404852f475c14699ef4ca6ee8bb42fdadcd5409948e27a4351196c\"" Aug 12 23:42:55.302256 containerd[2013]: time="2025-08-12T23:42:55.301095828Z" level=info msg="StartContainer for \"3b61c12fe7404852f475c14699ef4ca6ee8bb42fdadcd5409948e27a4351196c\"" Aug 12 23:42:55.307012 systemd[1]: Started cri-containerd-86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c.scope - libcontainer container 86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c. 
Aug 12 23:42:55.308845 containerd[2013]: time="2025-08-12T23:42:55.308596608Z" level=info msg="connecting to shim 3b61c12fe7404852f475c14699ef4ca6ee8bb42fdadcd5409948e27a4351196c" address="unix:///run/containerd/s/92ee751424d297f6ecdbaaaee3dd6538a91d7376b87bda67c015af93d4d88852" protocol=ttrpc version=3 Aug 12 23:42:55.377681 systemd[1]: Started cri-containerd-3b61c12fe7404852f475c14699ef4ca6ee8bb42fdadcd5409948e27a4351196c.scope - libcontainer container 3b61c12fe7404852f475c14699ef4ca6ee8bb42fdadcd5409948e27a4351196c. Aug 12 23:42:55.477032 containerd[2013]: time="2025-08-12T23:42:55.476577805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865b585bd6-h7tzr,Uid:31ccd3c2-6b19-4884-a3e2-dbbd56ff7d5b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9\"" Aug 12 23:42:55.519253 containerd[2013]: time="2025-08-12T23:42:55.518842813Z" level=info msg="StartContainer for \"3b61c12fe7404852f475c14699ef4ca6ee8bb42fdadcd5409948e27a4351196c\" returns successfully" Aug 12 23:42:55.578792 containerd[2013]: time="2025-08-12T23:42:55.567181561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dbgt6,Uid:09ff9862-d31d-47c3-9c5e-b1b67b525562,Namespace:calico-system,Attempt:0,} returns sandbox id \"86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c\"" Aug 12 23:42:55.579076 containerd[2013]: time="2025-08-12T23:42:55.578756173Z" level=info msg="StartContainer for \"689a56cbbcb5bbe2b1b360b7727dd788eb3a84a1b51f3d9561907ce58c0be540\" returns successfully" Aug 12 23:42:55.591361 systemd-networkd[1854]: calicaf35e81278: Gained IPv6LL Aug 12 23:42:56.233697 systemd-networkd[1854]: cali23ec5d5b849: Gained IPv6LL Aug 12 23:42:56.252958 kubelet[3515]: I0812 23:42:56.252775 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mmcwn" podStartSLOduration=54.252728976 
podStartE2EDuration="54.252728976s" podCreationTimestamp="2025-08-12 23:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:42:56.251988888 +0000 UTC m=+60.820167555" watchObservedRunningTime="2025-08-12 23:42:56.252728976 +0000 UTC m=+60.820907655" Aug 12 23:42:56.294534 systemd-networkd[1854]: cali6c9876227c9: Gained IPv6LL Aug 12 23:42:56.295008 systemd-networkd[1854]: calicf0d6093ae4: Gained IPv6LL Aug 12 23:42:57.771292 containerd[2013]: time="2025-08-12T23:42:57.771170404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:57.775184 containerd[2013]: time="2025-08-12T23:42:57.775107484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 12 23:42:57.777082 containerd[2013]: time="2025-08-12T23:42:57.776994220Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:57.783466 containerd[2013]: time="2025-08-12T23:42:57.783272152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:42:57.786482 containerd[2013]: time="2025-08-12T23:42:57.786380392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.841786626s" Aug 12 23:42:57.786482 containerd[2013]: 
time="2025-08-12T23:42:57.786436756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:42:57.790003 containerd[2013]: time="2025-08-12T23:42:57.789496720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 12 23:42:57.793899 containerd[2013]: time="2025-08-12T23:42:57.793643548Z" level=info msg="CreateContainer within sandbox \"ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:42:57.812958 containerd[2013]: time="2025-08-12T23:42:57.811694920Z" level=info msg="Container ab04ac1eba302a4d1f34def46028ec75a6c7ade950d07096abfdff6470e0febf: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:42:57.825465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069537474.mount: Deactivated successfully. Aug 12 23:42:57.836090 containerd[2013]: time="2025-08-12T23:42:57.836021644Z" level=info msg="CreateContainer within sandbox \"ad7be6fb968aa2f3f07cd1765f5ae43a4b6bd40e88b2657a201a6ad8bdd278bf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ab04ac1eba302a4d1f34def46028ec75a6c7ade950d07096abfdff6470e0febf\"" Aug 12 23:42:57.838146 containerd[2013]: time="2025-08-12T23:42:57.838055056Z" level=info msg="StartContainer for \"ab04ac1eba302a4d1f34def46028ec75a6c7ade950d07096abfdff6470e0febf\"" Aug 12 23:42:57.840319 containerd[2013]: time="2025-08-12T23:42:57.840190720Z" level=info msg="connecting to shim ab04ac1eba302a4d1f34def46028ec75a6c7ade950d07096abfdff6470e0febf" address="unix:///run/containerd/s/b7ecb3aa6c6bd41990b40e7803c3ec35e3327462c53a3fd1a18d0af6e39b57b4" protocol=ttrpc version=3 Aug 12 23:42:57.889620 systemd[1]: Started cri-containerd-ab04ac1eba302a4d1f34def46028ec75a6c7ade950d07096abfdff6470e0febf.scope - libcontainer container 
ab04ac1eba302a4d1f34def46028ec75a6c7ade950d07096abfdff6470e0febf. Aug 12 23:42:57.985799 containerd[2013]: time="2025-08-12T23:42:57.985753097Z" level=info msg="StartContainer for \"ab04ac1eba302a4d1f34def46028ec75a6c7ade950d07096abfdff6470e0febf\" returns successfully" Aug 12 23:42:58.980088 ntpd[1981]: Listen normally on 8 vxlan.calico 192.168.69.0:123 Aug 12 23:42:58.980220 ntpd[1981]: Listen normally on 9 cali9ac78e30238 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 12 23:42:58.982373 ntpd[1981]: 12 Aug 23:42:58 ntpd[1981]: Listen normally on 8 vxlan.calico 192.168.69.0:123 Aug 12 23:42:58.982373 ntpd[1981]: 12 Aug 23:42:58 ntpd[1981]: Listen normally on 9 cali9ac78e30238 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 12 23:42:58.982373 ntpd[1981]: 12 Aug 23:42:58 ntpd[1981]: Listen normally on 10 vxlan.calico [fe80::64e8:58ff:fe38:b186%5]:123 Aug 12 23:42:58.982373 ntpd[1981]: 12 Aug 23:42:58 ntpd[1981]: Listen normally on 11 calif45aa81c233 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 12 23:42:58.982373 ntpd[1981]: 12 Aug 23:42:58 ntpd[1981]: Listen normally on 12 calia96a8db9348 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 12 23:42:58.982373 ntpd[1981]: 12 Aug 23:42:58 ntpd[1981]: Listen normally on 13 calif7072cf5631 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 12 23:42:58.982373 ntpd[1981]: 12 Aug 23:42:58 ntpd[1981]: Listen normally on 14 calicaf35e81278 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 12 23:42:58.982373 ntpd[1981]: 12 Aug 23:42:58 ntpd[1981]: Listen normally on 15 cali23ec5d5b849 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 12 23:42:58.982373 ntpd[1981]: 12 Aug 23:42:58 ntpd[1981]: Listen normally on 16 calicf0d6093ae4 [fe80::ecee:eeff:feee:eeee%13]:123 Aug 12 23:42:58.982373 ntpd[1981]: 12 Aug 23:42:58 ntpd[1981]: Listen normally on 17 cali6c9876227c9 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 12 23:42:58.980363 ntpd[1981]: Listen normally on 10 vxlan.calico [fe80::64e8:58ff:fe38:b186%5]:123 Aug 12 23:42:58.980957 ntpd[1981]: Listen normally on 11 calif45aa81c233 
[fe80::ecee:eeff:feee:eeee%8]:123 Aug 12 23:42:58.981039 ntpd[1981]: Listen normally on 12 calia96a8db9348 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 12 23:42:58.981105 ntpd[1981]: Listen normally on 13 calif7072cf5631 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 12 23:42:58.981169 ntpd[1981]: Listen normally on 14 calicaf35e81278 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 12 23:42:58.981332 ntpd[1981]: Listen normally on 15 cali23ec5d5b849 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 12 23:42:58.981407 ntpd[1981]: Listen normally on 16 calicf0d6093ae4 [fe80::ecee:eeff:feee:eeee%13]:123 Aug 12 23:42:58.981474 ntpd[1981]: Listen normally on 17 cali6c9876227c9 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 12 23:42:59.246250 kubelet[3515]: I0812 23:42:59.246069 3515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:43:01.411704 systemd[1]: Started sshd@9-172.31.28.88:22-139.178.68.195:40786.service - OpenSSH per-connection server daemon (139.178.68.195:40786). Aug 12 23:43:01.633145 sshd[5639]: Accepted publickey for core from 139.178.68.195 port 40786 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:01.637976 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:01.656351 systemd-logind[1986]: New session 10 of user core. Aug 12 23:43:01.661738 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 12 23:43:02.078848 sshd[5641]: Connection closed by 139.178.68.195 port 40786 Aug 12 23:43:02.079339 sshd-session[5639]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:02.092773 systemd[1]: sshd@9-172.31.28.88:22-139.178.68.195:40786.service: Deactivated successfully. Aug 12 23:43:02.101109 systemd[1]: session-10.scope: Deactivated successfully. Aug 12 23:43:02.105929 systemd-logind[1986]: Session 10 logged out. Waiting for processes to exit. Aug 12 23:43:02.110484 systemd-logind[1986]: Removed session 10. 
Aug 12 23:43:02.736416 containerd[2013]: time="2025-08-12T23:43:02.736346121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:43:02.738351 containerd[2013]: time="2025-08-12T23:43:02.738271833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 12 23:43:02.740849 containerd[2013]: time="2025-08-12T23:43:02.740767485Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:43:02.745488 containerd[2013]: time="2025-08-12T23:43:02.745421925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:43:02.747018 containerd[2013]: time="2025-08-12T23:43:02.746806137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 4.956415225s" Aug 12 23:43:02.747018 containerd[2013]: time="2025-08-12T23:43:02.746862765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 12 23:43:02.749653 containerd[2013]: time="2025-08-12T23:43:02.749298081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:43:02.784903 containerd[2013]: time="2025-08-12T23:43:02.784731105Z" level=info msg="CreateContainer within sandbox 
\"940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 12 23:43:02.816568 containerd[2013]: time="2025-08-12T23:43:02.816514185Z" level=info msg="Container efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:43:02.839718 containerd[2013]: time="2025-08-12T23:43:02.839658249Z" level=info msg="CreateContainer within sandbox \"940d043bce15a2b2e38a94a7e3f14246c8f035a57fd0804b25ae868efb299779\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb\"" Aug 12 23:43:02.841107 containerd[2013]: time="2025-08-12T23:43:02.841045053Z" level=info msg="StartContainer for \"efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb\"" Aug 12 23:43:02.848562 containerd[2013]: time="2025-08-12T23:43:02.847513833Z" level=info msg="connecting to shim efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb" address="unix:///run/containerd/s/89c432044a8925f109d7f11e5415180b6da7cc8c7f369d839829eddaed5f09f1" protocol=ttrpc version=3 Aug 12 23:43:02.896892 systemd[1]: Started cri-containerd-efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb.scope - libcontainer container efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb. 
Aug 12 23:43:03.042354 containerd[2013]: time="2025-08-12T23:43:03.042145758Z" level=info msg="StartContainer for \"efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb\" returns successfully" Aug 12 23:43:03.081693 containerd[2013]: time="2025-08-12T23:43:03.080437110Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:43:03.085161 containerd[2013]: time="2025-08-12T23:43:03.085112178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 12 23:43:03.089479 containerd[2013]: time="2025-08-12T23:43:03.089400450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 340.036981ms" Aug 12 23:43:03.089723 containerd[2013]: time="2025-08-12T23:43:03.089689482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:43:03.092616 containerd[2013]: time="2025-08-12T23:43:03.092571570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 12 23:43:03.101032 containerd[2013]: time="2025-08-12T23:43:03.100971162Z" level=info msg="CreateContainer within sandbox \"58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:43:03.121250 containerd[2013]: time="2025-08-12T23:43:03.119574691Z" level=info msg="Container dffa71563597db05669fb401cf5239fcad366d4db2afc192d08e02ebb477263d: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:43:03.148941 
containerd[2013]: time="2025-08-12T23:43:03.148798279Z" level=info msg="CreateContainer within sandbox \"58c1ada224ae229d26bd34623233235abb074fb4c9d4f5c6075eb3916a7d7dd9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dffa71563597db05669fb401cf5239fcad366d4db2afc192d08e02ebb477263d\"" Aug 12 23:43:03.151565 containerd[2013]: time="2025-08-12T23:43:03.151278427Z" level=info msg="StartContainer for \"dffa71563597db05669fb401cf5239fcad366d4db2afc192d08e02ebb477263d\"" Aug 12 23:43:03.156152 containerd[2013]: time="2025-08-12T23:43:03.155759239Z" level=info msg="connecting to shim dffa71563597db05669fb401cf5239fcad366d4db2afc192d08e02ebb477263d" address="unix:///run/containerd/s/7c54244cdf6a2dca3407501d54cf87b087eb5001d22653ded442d42be9a789b6" protocol=ttrpc version=3 Aug 12 23:43:03.199997 systemd[1]: Started cri-containerd-dffa71563597db05669fb401cf5239fcad366d4db2afc192d08e02ebb477263d.scope - libcontainer container dffa71563597db05669fb401cf5239fcad366d4db2afc192d08e02ebb477263d. 
Aug 12 23:43:03.338889 kubelet[3515]: I0812 23:43:03.338599 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7df96dbd7c-shcc7" podStartSLOduration=27.780339434 podStartE2EDuration="35.338575232s" podCreationTimestamp="2025-08-12 23:42:28 +0000 UTC" firstStartedPulling="2025-08-12 23:42:55.190360835 +0000 UTC m=+59.758539502" lastFinishedPulling="2025-08-12 23:43:02.748596561 +0000 UTC m=+67.316775300" observedRunningTime="2025-08-12 23:43:03.337957772 +0000 UTC m=+67.906136451" watchObservedRunningTime="2025-08-12 23:43:03.338575232 +0000 UTC m=+67.906753899" Aug 12 23:43:03.340833 kubelet[3515]: I0812 23:43:03.339662 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-865b585bd6-k8x9p" podStartSLOduration=42.87832077 podStartE2EDuration="47.339641708s" podCreationTimestamp="2025-08-12 23:42:16 +0000 UTC" firstStartedPulling="2025-08-12 23:42:53.32783677 +0000 UTC m=+57.896015425" lastFinishedPulling="2025-08-12 23:42:57.789157708 +0000 UTC m=+62.357336363" observedRunningTime="2025-08-12 23:42:58.264749582 +0000 UTC m=+62.832928249" watchObservedRunningTime="2025-08-12 23:43:03.339641708 +0000 UTC m=+67.907820447" Aug 12 23:43:03.486849 containerd[2013]: time="2025-08-12T23:43:03.486763220Z" level=info msg="StartContainer for \"dffa71563597db05669fb401cf5239fcad366d4db2afc192d08e02ebb477263d\" returns successfully" Aug 12 23:43:03.575212 containerd[2013]: time="2025-08-12T23:43:03.575082525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb\" id:\"01f0def98817f14e35fbd7f0a8d2090526772793994ab01b16d586999214d08c\" pid:5745 exited_at:{seconds:1755042183 nanos:574469793}" Aug 12 23:43:03.771417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2049465080.mount: Deactivated successfully. 
Aug 12 23:43:04.328553 kubelet[3515]: I0812 23:43:04.328433 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-865b585bd6-h7tzr" podStartSLOduration=40.71969428 podStartE2EDuration="48.328383465s" podCreationTimestamp="2025-08-12 23:42:16 +0000 UTC" firstStartedPulling="2025-08-12 23:42:55.482730541 +0000 UTC m=+60.050909196" lastFinishedPulling="2025-08-12 23:43:03.091419726 +0000 UTC m=+67.659598381" observedRunningTime="2025-08-12 23:43:04.324244221 +0000 UTC m=+68.892422936" watchObservedRunningTime="2025-08-12 23:43:04.328383465 +0000 UTC m=+68.896562132" Aug 12 23:43:05.302520 kubelet[3515]: I0812 23:43:05.301952 3515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:43:07.129627 systemd[1]: Started sshd@10-172.31.28.88:22-139.178.68.195:40800.service - OpenSSH per-connection server daemon (139.178.68.195:40800). Aug 12 23:43:07.220580 containerd[2013]: time="2025-08-12T23:43:07.220189391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:43:07.223332 containerd[2013]: time="2025-08-12T23:43:07.222085439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 12 23:43:07.225007 containerd[2013]: time="2025-08-12T23:43:07.224882231Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:43:07.235450 containerd[2013]: time="2025-08-12T23:43:07.235360271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:43:07.240740 containerd[2013]: 
time="2025-08-12T23:43:07.240657815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 4.147880313s" Aug 12 23:43:07.243086 containerd[2013]: time="2025-08-12T23:43:07.243002195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 12 23:43:07.245733 containerd[2013]: time="2025-08-12T23:43:07.245452739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 12 23:43:07.248781 containerd[2013]: time="2025-08-12T23:43:07.248683823Z" level=info msg="CreateContainer within sandbox \"41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 12 23:43:07.278767 containerd[2013]: time="2025-08-12T23:43:07.278699075Z" level=info msg="Container 3f6e65ead9dc452331f53dbd84ecc0ef5220cff6f4ae173f28a4d07d92861f4b: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:43:07.305799 containerd[2013]: time="2025-08-12T23:43:07.305733599Z" level=info msg="CreateContainer within sandbox \"41e86c8f69cec34f0d3c903a63d5b7e5c90a16f7f4fb9969fbb19665c4ebabaa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3f6e65ead9dc452331f53dbd84ecc0ef5220cff6f4ae173f28a4d07d92861f4b\"" Aug 12 23:43:07.309030 containerd[2013]: time="2025-08-12T23:43:07.308952503Z" level=info msg="StartContainer for \"3f6e65ead9dc452331f53dbd84ecc0ef5220cff6f4ae173f28a4d07d92861f4b\"" Aug 12 23:43:07.328364 containerd[2013]: time="2025-08-12T23:43:07.328294823Z" level=info msg="connecting to 
shim 3f6e65ead9dc452331f53dbd84ecc0ef5220cff6f4ae173f28a4d07d92861f4b" address="unix:///run/containerd/s/3c4772cfa64935568437f082ba44063be69ab722aae37ad6496c2daf2543bded" protocol=ttrpc version=3 Aug 12 23:43:07.418437 sshd[5790]: Accepted publickey for core from 139.178.68.195 port 40800 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:07.432869 sshd-session[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:07.467092 systemd-logind[1986]: New session 11 of user core. Aug 12 23:43:07.471606 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 12 23:43:07.515747 systemd[1]: Started cri-containerd-3f6e65ead9dc452331f53dbd84ecc0ef5220cff6f4ae173f28a4d07d92861f4b.scope - libcontainer container 3f6e65ead9dc452331f53dbd84ecc0ef5220cff6f4ae173f28a4d07d92861f4b. Aug 12 23:43:07.720056 containerd[2013]: time="2025-08-12T23:43:07.719895037Z" level=info msg="StartContainer for \"3f6e65ead9dc452331f53dbd84ecc0ef5220cff6f4ae173f28a4d07d92861f4b\" returns successfully" Aug 12 23:43:07.819191 sshd[5805]: Connection closed by 139.178.68.195 port 40800 Aug 12 23:43:07.820133 sshd-session[5790]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:07.828181 systemd[1]: sshd@10-172.31.28.88:22-139.178.68.195:40800.service: Deactivated successfully. Aug 12 23:43:07.832005 systemd[1]: session-11.scope: Deactivated successfully. Aug 12 23:43:07.833822 systemd-logind[1986]: Session 11 logged out. Waiting for processes to exit. Aug 12 23:43:07.838604 systemd-logind[1986]: Removed session 11. 
Aug 12 23:43:07.905141 kubelet[3515]: I0812 23:43:07.905077 3515 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 12 23:43:07.906289 kubelet[3515]: I0812 23:43:07.905795 3515 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 12 23:43:08.540384 kubelet[3515]: I0812 23:43:08.539977 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8sh6l" podStartSLOduration=24.575771602 podStartE2EDuration="40.539954893s" podCreationTimestamp="2025-08-12 23:42:28 +0000 UTC" firstStartedPulling="2025-08-12 23:42:51.28085996 +0000 UTC m=+55.849038627" lastFinishedPulling="2025-08-12 23:43:07.245043263 +0000 UTC m=+71.813221918" observedRunningTime="2025-08-12 23:43:08.535993189 +0000 UTC m=+73.104171868" watchObservedRunningTime="2025-08-12 23:43:08.539954893 +0000 UTC m=+73.108133560" Aug 12 23:43:09.322753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156081921.mount: Deactivated successfully. 
Aug 12 23:43:10.246262 containerd[2013]: time="2025-08-12T23:43:10.246027098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:43:10.249329 containerd[2013]: time="2025-08-12T23:43:10.249271790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 12 23:43:10.251871 containerd[2013]: time="2025-08-12T23:43:10.251803370Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:43:10.256520 containerd[2013]: time="2025-08-12T23:43:10.256431746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:43:10.258148 containerd[2013]: time="2025-08-12T23:43:10.257934254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.012414819s" Aug 12 23:43:10.258148 containerd[2013]: time="2025-08-12T23:43:10.257992670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 12 23:43:10.265522 containerd[2013]: time="2025-08-12T23:43:10.265470590Z" level=info msg="CreateContainer within sandbox \"86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 12 23:43:10.287254 containerd[2013]: time="2025-08-12T23:43:10.285359054Z" 
level=info msg="Container 80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:43:10.296943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount10021178.mount: Deactivated successfully. Aug 12 23:43:10.307200 containerd[2013]: time="2025-08-12T23:43:10.307145246Z" level=info msg="CreateContainer within sandbox \"86252cb6b63d8404b099b978716006d8911c299fccf11409b55c22fca90a5c5c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640\"" Aug 12 23:43:10.308329 containerd[2013]: time="2025-08-12T23:43:10.308277794Z" level=info msg="StartContainer for \"80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640\"" Aug 12 23:43:10.310561 containerd[2013]: time="2025-08-12T23:43:10.310481402Z" level=info msg="connecting to shim 80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640" address="unix:///run/containerd/s/e2e5996539b4dc45ee3f7120a24886aefafd91e675efaea54305aaad0e51340a" protocol=ttrpc version=3 Aug 12 23:43:10.350991 systemd[1]: Started cri-containerd-80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640.scope - libcontainer container 80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640. 
Aug 12 23:43:10.434472 containerd[2013]: time="2025-08-12T23:43:10.434247243Z" level=info msg="StartContainer for \"80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640\" returns successfully" Aug 12 23:43:10.554984 kubelet[3515]: I0812 23:43:10.554572 3515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-dbgt6" podStartSLOduration=27.87831875 podStartE2EDuration="42.554542491s" podCreationTimestamp="2025-08-12 23:42:28 +0000 UTC" firstStartedPulling="2025-08-12 23:42:55.583732357 +0000 UTC m=+60.151911012" lastFinishedPulling="2025-08-12 23:43:10.259956086 +0000 UTC m=+74.828134753" observedRunningTime="2025-08-12 23:43:10.545793867 +0000 UTC m=+75.113972534" watchObservedRunningTime="2025-08-12 23:43:10.554542491 +0000 UTC m=+75.122721242" Aug 12 23:43:10.763267 containerd[2013]: time="2025-08-12T23:43:10.763171217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640\" id:\"78b1b7df9c6b422ff18169a720a13058fb7a840bb88a6d1fda161fa8ac8be968\" pid:5897 exit_status:1 exited_at:{seconds:1755042190 nanos:762465701}" Aug 12 23:43:11.823711 containerd[2013]: time="2025-08-12T23:43:11.822596118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640\" id:\"1eea7a943001ecddc3931429804059e0a2acff7d73ce70c081e17af11e434eb7\" pid:5923 exit_status:1 exited_at:{seconds:1755042191 nanos:822074346}" Aug 12 23:43:12.855021 systemd[1]: Started sshd@11-172.31.28.88:22-139.178.68.195:58750.service - OpenSSH per-connection server daemon (139.178.68.195:58750). 
Aug 12 23:43:13.066773 sshd[5935]: Accepted publickey for core from 139.178.68.195 port 58750 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:13.069901 sshd-session[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:13.079326 systemd-logind[1986]: New session 12 of user core. Aug 12 23:43:13.090526 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 12 23:43:13.278489 containerd[2013]: time="2025-08-12T23:43:13.277619741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb\" id:\"08501de596b653a4f77f72af2694573d214d2633027be300323d60bccdbca314\" pid:5952 exited_at:{seconds:1755042193 nanos:273834713}" Aug 12 23:43:13.384582 sshd[5937]: Connection closed by 139.178.68.195 port 58750 Aug 12 23:43:13.385478 sshd-session[5935]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:13.393449 systemd[1]: sshd@11-172.31.28.88:22-139.178.68.195:58750.service: Deactivated successfully. Aug 12 23:43:13.399354 systemd[1]: session-12.scope: Deactivated successfully. Aug 12 23:43:13.403189 systemd-logind[1986]: Session 12 logged out. Waiting for processes to exit. Aug 12 23:43:13.418276 systemd-logind[1986]: Removed session 12. Aug 12 23:43:13.420892 systemd[1]: Started sshd@12-172.31.28.88:22-139.178.68.195:58766.service - OpenSSH per-connection server daemon (139.178.68.195:58766). Aug 12 23:43:13.619151 sshd[5972]: Accepted publickey for core from 139.178.68.195 port 58766 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:13.621944 sshd-session[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:13.631899 systemd-logind[1986]: New session 13 of user core. Aug 12 23:43:13.637528 systemd[1]: Started session-13.scope - Session 13 of User core. 
Aug 12 23:43:13.967964 sshd[5974]: Connection closed by 139.178.68.195 port 58766 Aug 12 23:43:13.968882 sshd-session[5972]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:13.980030 systemd[1]: sshd@12-172.31.28.88:22-139.178.68.195:58766.service: Deactivated successfully. Aug 12 23:43:13.984662 systemd[1]: session-13.scope: Deactivated successfully. Aug 12 23:43:13.991416 systemd-logind[1986]: Session 13 logged out. Waiting for processes to exit. Aug 12 23:43:14.022678 systemd[1]: Started sshd@13-172.31.28.88:22-139.178.68.195:58768.service - OpenSSH per-connection server daemon (139.178.68.195:58768). Aug 12 23:43:14.028643 systemd-logind[1986]: Removed session 13. Aug 12 23:43:14.221613 sshd[5983]: Accepted publickey for core from 139.178.68.195 port 58768 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:14.224053 sshd-session[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:14.234298 systemd-logind[1986]: New session 14 of user core. Aug 12 23:43:14.240502 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 12 23:43:14.495741 sshd[5985]: Connection closed by 139.178.68.195 port 58768 Aug 12 23:43:14.496983 sshd-session[5983]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:14.504724 systemd[1]: sshd@13-172.31.28.88:22-139.178.68.195:58768.service: Deactivated successfully. Aug 12 23:43:14.510173 systemd[1]: session-14.scope: Deactivated successfully. Aug 12 23:43:14.513711 systemd-logind[1986]: Session 14 logged out. Waiting for processes to exit. Aug 12 23:43:14.518989 systemd-logind[1986]: Removed session 14. 
Aug 12 23:43:17.188197 containerd[2013]: time="2025-08-12T23:43:17.188132396Z" level=info msg="TaskExit event in podsandbox handler container_id:\"425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b\" id:\"ea6cd58fee49d62306a7472d22461680f400f578e32a5ba59fdeec714a01aec8\" pid:6010 exited_at:{seconds:1755042197 nanos:187776812}" Aug 12 23:43:19.537252 systemd[1]: Started sshd@14-172.31.28.88:22-139.178.68.195:58776.service - OpenSSH per-connection server daemon (139.178.68.195:58776). Aug 12 23:43:19.760767 sshd[6025]: Accepted publickey for core from 139.178.68.195 port 58776 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:19.764049 sshd-session[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:19.772695 systemd-logind[1986]: New session 15 of user core. Aug 12 23:43:19.781584 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 12 23:43:20.038000 sshd[6028]: Connection closed by 139.178.68.195 port 58776 Aug 12 23:43:20.038963 sshd-session[6025]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:20.045596 systemd[1]: sshd@14-172.31.28.88:22-139.178.68.195:58776.service: Deactivated successfully. Aug 12 23:43:20.046259 systemd-logind[1986]: Session 15 logged out. Waiting for processes to exit. Aug 12 23:43:20.050316 systemd[1]: session-15.scope: Deactivated successfully. Aug 12 23:43:20.056046 systemd-logind[1986]: Removed session 15. Aug 12 23:43:25.075902 systemd[1]: Started sshd@15-172.31.28.88:22-139.178.68.195:40870.service - OpenSSH per-connection server daemon (139.178.68.195:40870). Aug 12 23:43:25.283376 sshd[6043]: Accepted publickey for core from 139.178.68.195 port 40870 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:25.285925 sshd-session[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:25.294080 systemd-logind[1986]: New session 16 of user core. 
Aug 12 23:43:25.305543 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 12 23:43:25.550781 sshd[6045]: Connection closed by 139.178.68.195 port 40870 Aug 12 23:43:25.551464 sshd-session[6043]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:25.557649 systemd[1]: sshd@15-172.31.28.88:22-139.178.68.195:40870.service: Deactivated successfully. Aug 12 23:43:25.562547 systemd[1]: session-16.scope: Deactivated successfully. Aug 12 23:43:25.569061 systemd-logind[1986]: Session 16 logged out. Waiting for processes to exit. Aug 12 23:43:25.572625 systemd-logind[1986]: Removed session 16. Aug 12 23:43:30.590714 systemd[1]: Started sshd@16-172.31.28.88:22-139.178.68.195:60026.service - OpenSSH per-connection server daemon (139.178.68.195:60026). Aug 12 23:43:30.814506 sshd[6064]: Accepted publickey for core from 139.178.68.195 port 60026 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:30.819496 sshd-session[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:30.831160 systemd-logind[1986]: New session 17 of user core. Aug 12 23:43:30.837543 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 12 23:43:31.136923 sshd[6066]: Connection closed by 139.178.68.195 port 60026 Aug 12 23:43:31.139518 sshd-session[6064]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:31.147838 systemd[1]: sshd@16-172.31.28.88:22-139.178.68.195:60026.service: Deactivated successfully. Aug 12 23:43:31.153794 systemd[1]: session-17.scope: Deactivated successfully. Aug 12 23:43:31.162550 systemd-logind[1986]: Session 17 logged out. Waiting for processes to exit. Aug 12 23:43:31.167344 systemd-logind[1986]: Removed session 17. 
Aug 12 23:43:33.399274 containerd[2013]: time="2025-08-12T23:43:33.399172741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb\" id:\"6eef9452f61e7b7c761aeeda5e697426740bead8a655f295b5b17a959703aeb5\" pid:6094 exited_at:{seconds:1755042213 nanos:398206777}" Aug 12 23:43:36.183153 systemd[1]: Started sshd@17-172.31.28.88:22-139.178.68.195:60036.service - OpenSSH per-connection server daemon (139.178.68.195:60036). Aug 12 23:43:36.385153 sshd[6107]: Accepted publickey for core from 139.178.68.195 port 60036 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:36.387708 sshd-session[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:36.397328 systemd-logind[1986]: New session 18 of user core. Aug 12 23:43:36.402832 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 12 23:43:36.673001 sshd[6109]: Connection closed by 139.178.68.195 port 60036 Aug 12 23:43:36.674349 sshd-session[6107]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:36.682040 systemd[1]: sshd@17-172.31.28.88:22-139.178.68.195:60036.service: Deactivated successfully. Aug 12 23:43:36.686988 systemd[1]: session-18.scope: Deactivated successfully. Aug 12 23:43:36.690422 systemd-logind[1986]: Session 18 logged out. Waiting for processes to exit. Aug 12 23:43:36.711721 systemd-logind[1986]: Removed session 18. Aug 12 23:43:36.713753 systemd[1]: Started sshd@18-172.31.28.88:22-139.178.68.195:60052.service - OpenSSH per-connection server daemon (139.178.68.195:60052). Aug 12 23:43:36.925823 sshd[6120]: Accepted publickey for core from 139.178.68.195 port 60052 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:36.928433 sshd-session[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:36.938424 systemd-logind[1986]: New session 19 of user core. 
Aug 12 23:43:36.948549 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 12 23:43:37.553148 sshd[6122]: Connection closed by 139.178.68.195 port 60052 Aug 12 23:43:37.554141 sshd-session[6120]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:37.561730 systemd[1]: sshd@18-172.31.28.88:22-139.178.68.195:60052.service: Deactivated successfully. Aug 12 23:43:37.565992 systemd[1]: session-19.scope: Deactivated successfully. Aug 12 23:43:37.569032 systemd-logind[1986]: Session 19 logged out. Waiting for processes to exit. Aug 12 23:43:37.572917 systemd-logind[1986]: Removed session 19. Aug 12 23:43:37.591540 systemd[1]: Started sshd@19-172.31.28.88:22-139.178.68.195:60054.service - OpenSSH per-connection server daemon (139.178.68.195:60054). Aug 12 23:43:37.791479 sshd[6132]: Accepted publickey for core from 139.178.68.195 port 60054 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:37.794022 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:37.802545 systemd-logind[1986]: New session 20 of user core. Aug 12 23:43:37.810525 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 12 23:43:38.840258 sshd[6134]: Connection closed by 139.178.68.195 port 60054 Aug 12 23:43:38.843211 sshd-session[6132]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:38.856900 systemd[1]: sshd@19-172.31.28.88:22-139.178.68.195:60054.service: Deactivated successfully. Aug 12 23:43:38.866950 systemd[1]: session-20.scope: Deactivated successfully. Aug 12 23:43:38.871622 systemd-logind[1986]: Session 20 logged out. Waiting for processes to exit. Aug 12 23:43:38.895542 systemd[1]: Started sshd@20-172.31.28.88:22-139.178.68.195:60058.service - OpenSSH per-connection server daemon (139.178.68.195:60058). Aug 12 23:43:38.901031 systemd-logind[1986]: Removed session 20. 
Aug 12 23:43:39.113256 sshd[6153]: Accepted publickey for core from 139.178.68.195 port 60058 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:39.116371 sshd-session[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:39.124727 systemd-logind[1986]: New session 21 of user core. Aug 12 23:43:39.136507 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 12 23:43:39.681832 sshd[6157]: Connection closed by 139.178.68.195 port 60058 Aug 12 23:43:39.682702 sshd-session[6153]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:39.691647 systemd[1]: sshd@20-172.31.28.88:22-139.178.68.195:60058.service: Deactivated successfully. Aug 12 23:43:39.697180 systemd[1]: session-21.scope: Deactivated successfully. Aug 12 23:43:39.701616 systemd-logind[1986]: Session 21 logged out. Waiting for processes to exit. Aug 12 23:43:39.723090 systemd[1]: Started sshd@21-172.31.28.88:22-139.178.68.195:60066.service - OpenSSH per-connection server daemon (139.178.68.195:60066). Aug 12 23:43:39.725849 systemd-logind[1986]: Removed session 21. Aug 12 23:43:39.925168 sshd[6167]: Accepted publickey for core from 139.178.68.195 port 60066 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:39.927597 sshd-session[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:39.935963 systemd-logind[1986]: New session 22 of user core. Aug 12 23:43:39.944524 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 12 23:43:40.201474 sshd[6169]: Connection closed by 139.178.68.195 port 60066 Aug 12 23:43:40.202459 sshd-session[6167]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:40.210220 systemd-logind[1986]: Session 22 logged out. Waiting for processes to exit. Aug 12 23:43:40.211767 systemd[1]: sshd@21-172.31.28.88:22-139.178.68.195:60066.service: Deactivated successfully. 
Aug 12 23:43:40.215981 systemd[1]: session-22.scope: Deactivated successfully. Aug 12 23:43:40.220018 systemd-logind[1986]: Removed session 22. Aug 12 23:43:41.670699 containerd[2013]: time="2025-08-12T23:43:41.670614322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640\" id:\"8cf45e70da954f19a61471064d6c4ee3c430c2020a4af41da14c12bc2135fd5c\" pid:6193 exited_at:{seconds:1755042221 nanos:669689170}" Aug 12 23:43:45.247331 systemd[1]: Started sshd@22-172.31.28.88:22-139.178.68.195:59568.service - OpenSSH per-connection server daemon (139.178.68.195:59568). Aug 12 23:43:45.451582 sshd[6206]: Accepted publickey for core from 139.178.68.195 port 59568 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU Aug 12 23:43:45.454766 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:43:45.463886 systemd-logind[1986]: New session 23 of user core. Aug 12 23:43:45.469460 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 12 23:43:45.727279 sshd[6210]: Connection closed by 139.178.68.195 port 59568 Aug 12 23:43:45.728427 sshd-session[6206]: pam_unix(sshd:session): session closed for user core Aug 12 23:43:45.737200 systemd[1]: sshd@22-172.31.28.88:22-139.178.68.195:59568.service: Deactivated successfully. Aug 12 23:43:45.740685 systemd[1]: session-23.scope: Deactivated successfully. Aug 12 23:43:45.743170 systemd-logind[1986]: Session 23 logged out. Waiting for processes to exit. Aug 12 23:43:45.747011 systemd-logind[1986]: Removed session 23. 
Aug 12 23:43:47.207368 containerd[2013]: time="2025-08-12T23:43:47.207180518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b\" id:\"623786cb1a1d0d98a2be2d95eeaaa201aa1a6e6977592e1b13eec1835b716655\" pid:6234 exited_at:{seconds:1755042227 nanos:206424674}"
Aug 12 23:43:50.765921 systemd[1]: Started sshd@23-172.31.28.88:22-139.178.68.195:50064.service - OpenSSH per-connection server daemon (139.178.68.195:50064).
Aug 12 23:43:50.962644 sshd[6248]: Accepted publickey for core from 139.178.68.195 port 50064 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU
Aug 12 23:43:50.965246 sshd-session[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:43:50.974893 systemd-logind[1986]: New session 24 of user core.
Aug 12 23:43:50.985544 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 12 23:43:51.232279 sshd[6250]: Connection closed by 139.178.68.195 port 50064
Aug 12 23:43:51.232158 sshd-session[6248]: pam_unix(sshd:session): session closed for user core
Aug 12 23:43:51.238605 systemd[1]: sshd@23-172.31.28.88:22-139.178.68.195:50064.service: Deactivated successfully.
Aug 12 23:43:51.239067 systemd-logind[1986]: Session 24 logged out. Waiting for processes to exit.
Aug 12 23:43:51.244580 systemd[1]: session-24.scope: Deactivated successfully.
Aug 12 23:43:51.252510 systemd-logind[1986]: Removed session 24.
Aug 12 23:43:56.269187 systemd[1]: Started sshd@24-172.31.28.88:22-139.178.68.195:50076.service - OpenSSH per-connection server daemon (139.178.68.195:50076).
Aug 12 23:43:56.488801 sshd[6265]: Accepted publickey for core from 139.178.68.195 port 50076 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU
Aug 12 23:43:56.494457 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:43:56.506995 systemd-logind[1986]: New session 25 of user core.
Aug 12 23:43:56.516558 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 12 23:43:56.817422 sshd[6267]: Connection closed by 139.178.68.195 port 50076
Aug 12 23:43:56.820638 sshd-session[6265]: pam_unix(sshd:session): session closed for user core
Aug 12 23:43:56.829778 systemd-logind[1986]: Session 25 logged out. Waiting for processes to exit.
Aug 12 23:43:56.829848 systemd[1]: sshd@24-172.31.28.88:22-139.178.68.195:50076.service: Deactivated successfully.
Aug 12 23:43:56.834809 systemd[1]: session-25.scope: Deactivated successfully.
Aug 12 23:43:56.841822 systemd-logind[1986]: Removed session 25.
Aug 12 23:44:01.859927 systemd[1]: Started sshd@25-172.31.28.88:22-139.178.68.195:56754.service - OpenSSH per-connection server daemon (139.178.68.195:56754).
Aug 12 23:44:02.070916 sshd[6280]: Accepted publickey for core from 139.178.68.195 port 56754 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU
Aug 12 23:44:02.074101 sshd-session[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:44:02.085990 systemd-logind[1986]: New session 26 of user core.
Aug 12 23:44:02.092833 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 12 23:44:02.384125 sshd[6282]: Connection closed by 139.178.68.195 port 56754
Aug 12 23:44:02.385387 sshd-session[6280]: pam_unix(sshd:session): session closed for user core
Aug 12 23:44:02.392363 systemd-logind[1986]: Session 26 logged out. Waiting for processes to exit.
Aug 12 23:44:02.394745 systemd[1]: sshd@25-172.31.28.88:22-139.178.68.195:56754.service: Deactivated successfully.
Aug 12 23:44:02.404111 systemd[1]: session-26.scope: Deactivated successfully.
Aug 12 23:44:02.410910 systemd-logind[1986]: Removed session 26.
Aug 12 23:44:03.384674 containerd[2013]: time="2025-08-12T23:44:03.384602226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb\" id:\"de3213201f8b481eca5772796311a84606a27953f94bf9f41b943c2ce9cad2bf\" pid:6306 exited_at:{seconds:1755042243 nanos:382187370}"
Aug 12 23:44:05.214034 containerd[2013]: time="2025-08-12T23:44:05.213963667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640\" id:\"ed8965b88bb1e754862b3f7d55288ad2f9ab053162f62c87292fd5a34e58fe88\" pid:6328 exited_at:{seconds:1755042245 nanos:213551335}"
Aug 12 23:44:07.432545 systemd[1]: Started sshd@26-172.31.28.88:22-139.178.68.195:56760.service - OpenSSH per-connection server daemon (139.178.68.195:56760).
Aug 12 23:44:07.648733 sshd[6339]: Accepted publickey for core from 139.178.68.195 port 56760 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU
Aug 12 23:44:07.651759 sshd-session[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:44:07.664856 systemd-logind[1986]: New session 27 of user core.
Aug 12 23:44:07.672522 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 12 23:44:07.980834 sshd[6341]: Connection closed by 139.178.68.195 port 56760
Aug 12 23:44:07.981969 sshd-session[6339]: pam_unix(sshd:session): session closed for user core
Aug 12 23:44:07.989793 systemd[1]: sshd@26-172.31.28.88:22-139.178.68.195:56760.service: Deactivated successfully.
Aug 12 23:44:07.995028 systemd[1]: session-27.scope: Deactivated successfully.
Aug 12 23:44:07.999759 systemd-logind[1986]: Session 27 logged out. Waiting for processes to exit.
Aug 12 23:44:08.006938 systemd-logind[1986]: Removed session 27.
Aug 12 23:44:11.865918 containerd[2013]: time="2025-08-12T23:44:11.865837108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80616abc8f0ce221db2fc52b02b6242e07a7a8a88c1f49b1088f849c586e0640\" id:\"69c1c58833110ed29027985067d38e78783e767c38f6774698fcfc852d6fe302\" pid:6371 exited_at:{seconds:1755042251 nanos:863323936}"
Aug 12 23:44:13.024692 systemd[1]: Started sshd@27-172.31.28.88:22-139.178.68.195:34816.service - OpenSSH per-connection server daemon (139.178.68.195:34816).
Aug 12 23:44:13.258266 sshd[6382]: Accepted publickey for core from 139.178.68.195 port 34816 ssh2: RSA SHA256:SwPVXgr9Z3USoEIGaIVJgb3ucUVUAVJTtj2JVccGtMU
Aug 12 23:44:13.262480 sshd-session[6382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:44:13.273274 systemd-logind[1986]: New session 28 of user core.
Aug 12 23:44:13.283664 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 12 23:44:13.341686 containerd[2013]: time="2025-08-12T23:44:13.341607015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb\" id:\"7281346fa98e6e47b132fa8ddbe2671ab110f2a6a70decdf675742d2e3d8ed38\" pid:6397 exited_at:{seconds:1755042253 nanos:339781983}"
Aug 12 23:44:13.582323 sshd[6403]: Connection closed by 139.178.68.195 port 34816
Aug 12 23:44:13.584446 sshd-session[6382]: pam_unix(sshd:session): session closed for user core
Aug 12 23:44:13.590884 systemd[1]: sshd@27-172.31.28.88:22-139.178.68.195:34816.service: Deactivated successfully.
Aug 12 23:44:13.599663 systemd[1]: session-28.scope: Deactivated successfully.
Aug 12 23:44:13.605820 systemd-logind[1986]: Session 28 logged out. Waiting for processes to exit.
Aug 12 23:44:13.611342 systemd-logind[1986]: Removed session 28.
Aug 12 23:44:17.237733 containerd[2013]: time="2025-08-12T23:44:17.237668827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"425eb6361ef080a83462f52b6d58b492241f5cdbfe6ce1c7fc9d75a2676d753b\" id:\"750216e366b34d792b0d680510d7f2a3b515491fe030ad67586914a943c82897\" pid:6431 exited_at:{seconds:1755042257 nanos:237220531}"
Aug 12 23:44:27.257953 systemd[1]: cri-containerd-6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f.scope: Deactivated successfully.
Aug 12 23:44:27.258883 systemd[1]: cri-containerd-6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f.scope: Consumed 6.327s CPU time, 61.4M memory peak, 132K read from disk.
Aug 12 23:44:27.267455 containerd[2013]: time="2025-08-12T23:44:27.267389153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f\" id:\"6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f\" pid:3191 exit_status:1 exited_at:{seconds:1755042267 nanos:266182157}"
Aug 12 23:44:27.320952 containerd[2013]: time="2025-08-12T23:44:27.320785793Z" level=info msg="received exit event container_id:\"6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f\" id:\"6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f\" pid:3191 exit_status:1 exited_at:{seconds:1755042267 nanos:266182157}"
Aug 12 23:44:27.375716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f-rootfs.mount: Deactivated successfully.
Aug 12 23:44:27.841548 kubelet[3515]: I0812 23:44:27.841401 3515 scope.go:117] "RemoveContainer" containerID="6a678ba2e01b662789b624126d75e3a4abfbf96ab3bf7d8943130717848a282f"
Aug 12 23:44:27.846477 containerd[2013]: time="2025-08-12T23:44:27.846420319Z" level=info msg="CreateContainer within sandbox \"dfa327baadda3764f237498798e196769889d732b46564b50924e98fe2033fe0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 12 23:44:27.867736 containerd[2013]: time="2025-08-12T23:44:27.867657572Z" level=info msg="Container 67155a4acd56c7acc57f4580c5fe8dafcb099c9453329e8c3d4a4c36fbbc4ef2: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:44:27.896507 containerd[2013]: time="2025-08-12T23:44:27.896450096Z" level=info msg="CreateContainer within sandbox \"dfa327baadda3764f237498798e196769889d732b46564b50924e98fe2033fe0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"67155a4acd56c7acc57f4580c5fe8dafcb099c9453329e8c3d4a4c36fbbc4ef2\""
Aug 12 23:44:27.897427 containerd[2013]: time="2025-08-12T23:44:27.897385844Z" level=info msg="StartContainer for \"67155a4acd56c7acc57f4580c5fe8dafcb099c9453329e8c3d4a4c36fbbc4ef2\""
Aug 12 23:44:27.899880 containerd[2013]: time="2025-08-12T23:44:27.899782364Z" level=info msg="connecting to shim 67155a4acd56c7acc57f4580c5fe8dafcb099c9453329e8c3d4a4c36fbbc4ef2" address="unix:///run/containerd/s/2b78382a9ca48531db966f8fa212d5ccdd2f8840046f68dc84878ad93e16ef36" protocol=ttrpc version=3
Aug 12 23:44:27.940611 systemd[1]: Started cri-containerd-67155a4acd56c7acc57f4580c5fe8dafcb099c9453329e8c3d4a4c36fbbc4ef2.scope - libcontainer container 67155a4acd56c7acc57f4580c5fe8dafcb099c9453329e8c3d4a4c36fbbc4ef2.
Aug 12 23:44:28.048874 containerd[2013]: time="2025-08-12T23:44:28.048827848Z" level=info msg="StartContainer for \"67155a4acd56c7acc57f4580c5fe8dafcb099c9453329e8c3d4a4c36fbbc4ef2\" returns successfully"
Aug 12 23:44:28.475583 systemd[1]: cri-containerd-9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337.scope: Deactivated successfully.
Aug 12 23:44:28.477197 systemd[1]: cri-containerd-9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337.scope: Consumed 25.892s CPU time, 98.5M memory peak, 416K read from disk.
Aug 12 23:44:28.490364 containerd[2013]: time="2025-08-12T23:44:28.490182883Z" level=info msg="received exit event container_id:\"9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337\" id:\"9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337\" pid:3831 exit_status:1 exited_at:{seconds:1755042268 nanos:489079147}"
Aug 12 23:44:28.491364 containerd[2013]: time="2025-08-12T23:44:28.490928827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337\" id:\"9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337\" pid:3831 exit_status:1 exited_at:{seconds:1755042268 nanos:489079147}"
Aug 12 23:44:28.544096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337-rootfs.mount: Deactivated successfully.
Aug 12 23:44:28.856353 kubelet[3515]: I0812 23:44:28.855990 3515 scope.go:117] "RemoveContainer" containerID="9defb830f7b6bb0e16bdd404dffd1a2f54a6ccc4508582ad44631e204aa95337"
Aug 12 23:44:28.860025 containerd[2013]: time="2025-08-12T23:44:28.859961324Z" level=info msg="CreateContainer within sandbox \"f81764757b478bee46d9e01d1ce3522024cb76b1adf57f0f95820661c79715f5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 12 23:44:28.881380 containerd[2013]: time="2025-08-12T23:44:28.879209865Z" level=info msg="Container 5c6f42d47e5ab3c82ffa18f58fd36398e362ce18e05e6004ed3b6aa437ab8519: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:44:28.904109 containerd[2013]: time="2025-08-12T23:44:28.904052277Z" level=info msg="CreateContainer within sandbox \"f81764757b478bee46d9e01d1ce3522024cb76b1adf57f0f95820661c79715f5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5c6f42d47e5ab3c82ffa18f58fd36398e362ce18e05e6004ed3b6aa437ab8519\""
Aug 12 23:44:28.905837 containerd[2013]: time="2025-08-12T23:44:28.905798733Z" level=info msg="StartContainer for \"5c6f42d47e5ab3c82ffa18f58fd36398e362ce18e05e6004ed3b6aa437ab8519\""
Aug 12 23:44:28.908247 containerd[2013]: time="2025-08-12T23:44:28.908163837Z" level=info msg="connecting to shim 5c6f42d47e5ab3c82ffa18f58fd36398e362ce18e05e6004ed3b6aa437ab8519" address="unix:///run/containerd/s/be343ad32399e8abe0de469965fab5d8e5c55749d810910ebd43de00227faa2e" protocol=ttrpc version=3
Aug 12 23:44:28.930332 kubelet[3515]: E0812 23:44:28.928659 3515 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-88?timeout=10s\": context deadline exceeded"
Aug 12 23:44:28.957549 systemd[1]: Started cri-containerd-5c6f42d47e5ab3c82ffa18f58fd36398e362ce18e05e6004ed3b6aa437ab8519.scope - libcontainer container 5c6f42d47e5ab3c82ffa18f58fd36398e362ce18e05e6004ed3b6aa437ab8519.
Aug 12 23:44:29.032170 containerd[2013]: time="2025-08-12T23:44:29.032100149Z" level=info msg="StartContainer for \"5c6f42d47e5ab3c82ffa18f58fd36398e362ce18e05e6004ed3b6aa437ab8519\" returns successfully"
Aug 12 23:44:32.980697 systemd[1]: cri-containerd-e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b.scope: Deactivated successfully.
Aug 12 23:44:32.981892 systemd[1]: cri-containerd-e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b.scope: Consumed 5.871s CPU time, 20.8M memory peak, 64K read from disk.
Aug 12 23:44:32.985217 containerd[2013]: time="2025-08-12T23:44:32.983391133Z" level=info msg="received exit event container_id:\"e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b\" id:\"e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b\" pid:3163 exit_status:1 exited_at:{seconds:1755042272 nanos:982782685}"
Aug 12 23:44:32.985217 containerd[2013]: time="2025-08-12T23:44:32.984881617Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b\" id:\"e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b\" pid:3163 exit_status:1 exited_at:{seconds:1755042272 nanos:982782685}"
Aug 12 23:44:33.032929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b-rootfs.mount: Deactivated successfully.
Aug 12 23:44:33.360554 containerd[2013]: time="2025-08-12T23:44:33.360505415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efafa75b35c67f1344cb10ef40b9b22a7a50b63de8c60c7d5c179d90bbea60fb\" id:\"387ff17c63ee3e1da61edb0969796c1b0501f976d494c92f944d49ff97e95b44\" pid:6578 exit_status:1 exited_at:{seconds:1755042273 nanos:360116423}"
Aug 12 23:44:33.882017 kubelet[3515]: I0812 23:44:33.881947 3515 scope.go:117] "RemoveContainer" containerID="e0f7ad1d518eca4220dcec036c6372238459fda39c51172ab3362e0dcfea742b"
Aug 12 23:44:33.886924 containerd[2013]: time="2025-08-12T23:44:33.886860457Z" level=info msg="CreateContainer within sandbox \"fafc9c8446e34a4a09dfe2f98c5047cfb1b8a098cfa780911638ae4f3da658c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 12 23:44:33.906911 containerd[2013]: time="2025-08-12T23:44:33.904522814Z" level=info msg="Container ea7eb5c93af51d927002edfbf974e51c7dc171a5ffcf6e725cf98a2a93379fd5: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:44:33.923041 containerd[2013]: time="2025-08-12T23:44:33.922938530Z" level=info msg="CreateContainer within sandbox \"fafc9c8446e34a4a09dfe2f98c5047cfb1b8a098cfa780911638ae4f3da658c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ea7eb5c93af51d927002edfbf974e51c7dc171a5ffcf6e725cf98a2a93379fd5\""
Aug 12 23:44:33.924582 containerd[2013]: time="2025-08-12T23:44:33.924516038Z" level=info msg="StartContainer for \"ea7eb5c93af51d927002edfbf974e51c7dc171a5ffcf6e725cf98a2a93379fd5\""
Aug 12 23:44:33.926780 containerd[2013]: time="2025-08-12T23:44:33.926709362Z" level=info msg="connecting to shim ea7eb5c93af51d927002edfbf974e51c7dc171a5ffcf6e725cf98a2a93379fd5" address="unix:///run/containerd/s/4a10b7a2d55711425141f3c1f3cb083e387dc8b51bbbbb259c47eda2645aa850" protocol=ttrpc version=3
Aug 12 23:44:33.971829 systemd[1]: Started cri-containerd-ea7eb5c93af51d927002edfbf974e51c7dc171a5ffcf6e725cf98a2a93379fd5.scope - libcontainer container ea7eb5c93af51d927002edfbf974e51c7dc171a5ffcf6e725cf98a2a93379fd5.
Aug 12 23:44:34.055960 containerd[2013]: time="2025-08-12T23:44:34.055763614Z" level=info msg="StartContainer for \"ea7eb5c93af51d927002edfbf974e51c7dc171a5ffcf6e725cf98a2a93379fd5\" returns successfully"