Sep 10 23:48:34.111555 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 10 23:48:34.111655 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:24:03 -00 2025
Sep 10 23:48:34.111682 kernel: KASLR disabled due to lack of seed
Sep 10 23:48:34.111699 kernel: efi: EFI v2.7 by EDK II
Sep 10 23:48:34.111715 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 10 23:48:34.111730 kernel: secureboot: Secure boot disabled
Sep 10 23:48:34.111747 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:48:34.111762 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 10 23:48:34.111777 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 10 23:48:34.111792 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 10 23:48:34.111807 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 10 23:48:34.111826 kernel: ACPI: FACS 0x0000000078630000 000040
Sep 10 23:48:34.111841 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 10 23:48:34.111856 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 10 23:48:34.111875 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 10 23:48:34.111893 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 10 23:48:34.111943 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 10 23:48:34.111980 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 10 23:48:34.112022 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 10 23:48:34.112055 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 10 23:48:34.112088 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 10 23:48:34.112110 kernel: printk: legacy bootconsole [uart0] enabled
Sep 10 23:48:34.112129 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 23:48:34.112148 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 10 23:48:34.112167 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Sep 10 23:48:34.112183 kernel: Zone ranges:
Sep 10 23:48:34.112199 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Sep 10 23:48:34.112221 kernel:   DMA32    empty
Sep 10 23:48:34.112237 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 10 23:48:34.112252 kernel:   Device   empty
Sep 10 23:48:34.112267 kernel: Movable zone start for each node
Sep 10 23:48:34.112282 kernel: Early memory node ranges
Sep 10 23:48:34.112297 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 10 23:48:34.112312 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 10 23:48:34.112327 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Sep 10 23:48:34.112343 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 10 23:48:34.112358 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 10 23:48:34.112373 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 10 23:48:34.112388 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 10 23:48:34.112409 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 10 23:48:34.112431 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 10 23:48:34.112448 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 10 23:48:34.112465 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Sep 10 23:48:34.112482 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:48:34.112502 kernel: psci: PSCIv1.0 detected in firmware.
Sep 10 23:48:34.112519 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:48:34.112535 kernel: psci: Trusted OS migration not required
Sep 10 23:48:34.112551 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:48:34.112567 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 10 23:48:34.112624 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 23:48:34.112644 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 23:48:34.112660 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 10 23:48:34.112676 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:48:34.112693 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:48:34.112709 kernel: CPU features: detected: Spectre-v2
Sep 10 23:48:34.112731 kernel: CPU features: detected: Spectre-v3a
Sep 10 23:48:34.112748 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:48:34.112764 kernel: CPU features: detected: ARM erratum 1742098
Sep 10 23:48:34.112781 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 10 23:48:34.112797 kernel: alternatives: applying boot alternatives
Sep 10 23:48:34.112815 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:48:34.112833 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:48:34.112850 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:48:34.112866 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:48:34.112882 kernel: Fallback order for Node 0: 0
Sep 10 23:48:34.112902 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Sep 10 23:48:34.112919 kernel: Policy zone: Normal
Sep 10 23:48:34.112935 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:48:34.112951 kernel: software IO TLB: area num 2.
Sep 10 23:48:34.112967 kernel: software IO TLB: mapped [mem 0x000000006c5f0000-0x00000000705f0000] (64MB)
Sep 10 23:48:34.112984 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 10 23:48:34.113000 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:48:34.113017 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:48:34.113033 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 10 23:48:34.113050 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:48:34.113067 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:48:34.113083 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:48:34.113103 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 10 23:48:34.113120 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 10 23:48:34.113136 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 10 23:48:34.113152 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:48:34.113169 kernel: GICv3: 96 SPIs implemented
Sep 10 23:48:34.113186 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:48:34.113202 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:48:34.113218 kernel: GICv3: GICv3 features: 16 PPIs
Sep 10 23:48:34.113234 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 23:48:34.113250 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 10 23:48:34.113266 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 10 23:48:34.113282 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:48:34.113303 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:48:34.113319 kernel: GICv3: using LPI property table @0x0000000400110000
Sep 10 23:48:34.113335 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 10 23:48:34.113351 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Sep 10 23:48:34.113367 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:48:34.113383 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 10 23:48:34.113400 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 10 23:48:34.113416 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 10 23:48:34.113433 kernel: Console: colour dummy device 80x25
Sep 10 23:48:34.113450 kernel: printk: legacy console [tty1] enabled
Sep 10 23:48:34.113466 kernel: ACPI: Core revision 20240827
Sep 10 23:48:34.113487 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 10 23:48:34.113504 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:48:34.113521 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 23:48:34.113537 kernel: landlock: Up and running.
Sep 10 23:48:34.113554 kernel: SELinux: Initializing.
Sep 10 23:48:34.113571 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:48:34.113608 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:48:34.113626 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:48:34.113644 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:48:34.113667 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 23:48:34.113683 kernel: Remapping and enabling EFI services.
Sep 10 23:48:34.113700 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:48:34.113717 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:48:34.113733 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 10 23:48:34.113750 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Sep 10 23:48:34.113766 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 10 23:48:34.113782 kernel: smp: Brought up 1 node, 2 CPUs
Sep 10 23:48:34.113799 kernel: SMP: Total of 2 processors activated.
Sep 10 23:48:34.113828 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:48:34.113846 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:48:34.113867 kernel: CPU features: detected: 32-bit EL1 Support
Sep 10 23:48:34.113885 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:48:34.113902 kernel: alternatives: applying system-wide alternatives
Sep 10 23:48:34.113920 kernel: Memory: 3797032K/4030464K available (11136K kernel code, 2436K rwdata, 9084K rodata, 38976K init, 1038K bss, 212088K reserved, 16384K cma-reserved)
Sep 10 23:48:34.113938 kernel: devtmpfs: initialized
Sep 10 23:48:34.114017 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:48:34.114038 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 10 23:48:34.114056 kernel: 17040 pages in range for non-PLT usage
Sep 10 23:48:34.114073 kernel: 508560 pages in range for PLT usage
Sep 10 23:48:34.114091 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:48:34.114108 kernel: SMBIOS 3.0.0 present.
Sep 10 23:48:34.114125 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 10 23:48:34.114143 kernel: DMI: Memory slots populated: 0/0
Sep 10 23:48:34.114160 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:48:34.114182 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:48:34.114200 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:48:34.114218 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:48:34.114235 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:48:34.114252 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Sep 10 23:48:34.114269 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:48:34.114287 kernel: cpuidle: using governor menu
Sep 10 23:48:34.114304 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:48:34.114321 kernel: ASID allocator initialised with 65536 entries
Sep 10 23:48:34.114343 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:48:34.114360 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:48:34.114378 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:48:34.114395 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:48:34.114413 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:48:34.114430 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:48:34.114448 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:48:34.114465 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:48:34.114482 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:48:34.114503 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:48:34.114521 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:48:34.114538 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:48:34.114556 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:48:34.114573 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:48:34.114612 kernel: ACPI: Interpreter enabled
Sep 10 23:48:34.114630 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:48:34.114648 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:48:34.114665 kernel: ACPI: CPU0 has been hot-added
Sep 10 23:48:34.114688 kernel: ACPI: CPU1 has been hot-added
Sep 10 23:48:34.114706 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 10 23:48:34.115436 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:48:34.117727 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:48:34.117945 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:48:34.118158 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 10 23:48:34.118347 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 10 23:48:34.118383 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 10 23:48:34.118403 kernel: acpiphp: Slot [1] registered
Sep 10 23:48:34.118421 kernel: acpiphp: Slot [2] registered
Sep 10 23:48:34.118439 kernel: acpiphp: Slot [3] registered
Sep 10 23:48:34.118456 kernel: acpiphp: Slot [4] registered
Sep 10 23:48:34.118474 kernel: acpiphp: Slot [5] registered
Sep 10 23:48:34.118491 kernel: acpiphp: Slot [6] registered
Sep 10 23:48:34.118508 kernel: acpiphp: Slot [7] registered
Sep 10 23:48:34.118526 kernel: acpiphp: Slot [8] registered
Sep 10 23:48:34.118543 kernel: acpiphp: Slot [9] registered
Sep 10 23:48:34.118565 kernel: acpiphp: Slot [10] registered
Sep 10 23:48:34.118623 kernel: acpiphp: Slot [11] registered
Sep 10 23:48:34.118644 kernel: acpiphp: Slot [12] registered
Sep 10 23:48:34.118662 kernel: acpiphp: Slot [13] registered
Sep 10 23:48:34.118680 kernel: acpiphp: Slot [14] registered
Sep 10 23:48:34.118698 kernel: acpiphp: Slot [15] registered
Sep 10 23:48:34.118715 kernel: acpiphp: Slot [16] registered
Sep 10 23:48:34.118733 kernel: acpiphp: Slot [17] registered
Sep 10 23:48:34.118751 kernel: acpiphp: Slot [18] registered
Sep 10 23:48:34.118775 kernel: acpiphp: Slot [19] registered
Sep 10 23:48:34.118793 kernel: acpiphp: Slot [20] registered
Sep 10 23:48:34.118810 kernel: acpiphp: Slot [21] registered
Sep 10 23:48:34.118828 kernel: acpiphp: Slot [22] registered
Sep 10 23:48:34.118845 kernel: acpiphp: Slot [23] registered
Sep 10 23:48:34.118863 kernel: acpiphp: Slot [24] registered
Sep 10 23:48:34.118881 kernel: acpiphp: Slot [25] registered
Sep 10 23:48:34.118899 kernel: acpiphp: Slot [26] registered
Sep 10 23:48:34.118917 kernel: acpiphp: Slot [27] registered
Sep 10 23:48:34.118935 kernel: acpiphp: Slot [28] registered
Sep 10 23:48:34.121638 kernel: acpiphp: Slot [29] registered
Sep 10 23:48:34.121678 kernel: acpiphp: Slot [30] registered
Sep 10 23:48:34.121697 kernel: acpiphp: Slot [31] registered
Sep 10 23:48:34.121715 kernel: PCI host bridge to bus 0000:00
Sep 10 23:48:34.122023 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 10 23:48:34.122207 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:48:34.122378 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 10 23:48:34.122559 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 10 23:48:34.122808 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Sep 10 23:48:34.123025 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Sep 10 23:48:34.123219 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Sep 10 23:48:34.123498 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Sep 10 23:48:34.126333 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Sep 10 23:48:34.126551 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 10 23:48:34.128910 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Sep 10 23:48:34.129123 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Sep 10 23:48:34.129322 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Sep 10 23:48:34.129518 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Sep 10 23:48:34.129755 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 10 23:48:34.129964 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Sep 10 23:48:34.130166 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Sep 10 23:48:34.130369 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Sep 10 23:48:34.130559 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Sep 10 23:48:34.130782 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Sep 10 23:48:34.130962 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 10 23:48:34.131132 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:48:34.131301 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 10 23:48:34.131331 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 23:48:34.131350 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 23:48:34.131368 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 23:48:34.131386 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 23:48:34.131403 kernel: iommu: Default domain type: Translated
Sep 10 23:48:34.131421 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 23:48:34.131438 kernel: efivars: Registered efivars operations
Sep 10 23:48:34.131456 kernel: vgaarb: loaded
Sep 10 23:48:34.131473 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 23:48:34.131491 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 23:48:34.131514 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 23:48:34.131532 kernel: pnp: PnP ACPI init
Sep 10 23:48:34.133373 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 10 23:48:34.133420 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 23:48:34.133439 kernel: NET: Registered PF_INET protocol family
Sep 10 23:48:34.133457 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 23:48:34.133475 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 23:48:34.133493 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 23:48:34.133521 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 23:48:34.133539 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 23:48:34.133557 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 23:48:34.133575 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:48:34.133627 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:48:34.133645 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 23:48:34.133663 kernel: PCI: CLS 0 bytes, default 64
Sep 10 23:48:34.133681 kernel: kvm [1]: HYP mode not available
Sep 10 23:48:34.133699 kernel: Initialise system trusted keyrings
Sep 10 23:48:34.133724 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 23:48:34.133743 kernel: Key type asymmetric registered
Sep 10 23:48:34.133762 kernel: Asymmetric key parser 'x509' registered
Sep 10 23:48:34.133781 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 23:48:34.133801 kernel: io scheduler mq-deadline registered
Sep 10 23:48:34.133820 kernel: io scheduler kyber registered
Sep 10 23:48:34.133839 kernel: io scheduler bfq registered
Sep 10 23:48:34.134147 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 10 23:48:34.134193 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 23:48:34.134213 kernel: ACPI: button: Power Button [PWRB]
Sep 10 23:48:34.134232 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 10 23:48:34.134257 kernel: ACPI: button: Sleep Button [SLPB]
Sep 10 23:48:34.134275 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 23:48:34.134295 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 10 23:48:34.134513 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 10 23:48:34.134542 kernel: printk: legacy console [ttyS0] disabled
Sep 10 23:48:34.134561 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 10 23:48:34.134622 kernel: printk: legacy console [ttyS0] enabled
Sep 10 23:48:34.134643 kernel: printk: legacy bootconsole [uart0] disabled
Sep 10 23:48:34.134661 kernel: thunder_xcv, ver 1.0
Sep 10 23:48:34.134678 kernel: thunder_bgx, ver 1.0
Sep 10 23:48:34.134697 kernel: nicpf, ver 1.0
Sep 10 23:48:34.134714 kernel: nicvf, ver 1.0
Sep 10 23:48:34.135679 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 23:48:34.135866 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:48:33 UTC (1757548113)
Sep 10 23:48:34.135898 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 23:48:34.135917 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Sep 10 23:48:34.135935 kernel: NET: Registered PF_INET6 protocol family
Sep 10 23:48:34.135952 kernel: watchdog: NMI not fully supported
Sep 10 23:48:34.135969 kernel: Segment Routing with IPv6
Sep 10 23:48:34.135987 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 23:48:34.136004 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 23:48:34.136021 kernel: NET: Registered PF_PACKET protocol family
Sep 10 23:48:34.136038 kernel: Key type dns_resolver registered
Sep 10 23:48:34.136060 kernel: registered taskstats version 1
Sep 10 23:48:34.136078 kernel: Loading compiled-in X.509 certificates
Sep 10 23:48:34.136095 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 3c20aab1105575c84ea94c1a59a27813fcebdea7'
Sep 10 23:48:34.136113 kernel: Demotion targets for Node 0: null
Sep 10 23:48:34.136130 kernel: Key type .fscrypt registered
Sep 10 23:48:34.136147 kernel: Key type fscrypt-provisioning registered
Sep 10 23:48:34.136164 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 23:48:34.136181 kernel: ima: Allocated hash algorithm: sha1
Sep 10 23:48:34.136198 kernel: ima: No architecture policies found
Sep 10 23:48:34.136220 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 23:48:34.136238 kernel: clk: Disabling unused clocks
Sep 10 23:48:34.136255 kernel: PM: genpd: Disabling unused power domains
Sep 10 23:48:34.136272 kernel: Warning: unable to open an initial console.
Sep 10 23:48:34.136290 kernel: Freeing unused kernel memory: 38976K
Sep 10 23:48:34.136307 kernel: Run /init as init process
Sep 10 23:48:34.136324 kernel:   with arguments:
Sep 10 23:48:34.136341 kernel:     /init
Sep 10 23:48:34.136358 kernel:   with environment:
Sep 10 23:48:34.136375 kernel:     HOME=/
Sep 10 23:48:34.136396 kernel:     TERM=linux
Sep 10 23:48:34.136413 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 23:48:34.136432 systemd[1]: Successfully made /usr/ read-only.
Sep 10 23:48:34.136456 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:48:34.136476 systemd[1]: Detected virtualization amazon.
Sep 10 23:48:34.136494 systemd[1]: Detected architecture arm64.
Sep 10 23:48:34.136513 systemd[1]: Running in initrd.
Sep 10 23:48:34.136535 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:48:34.136555 systemd[1]: Hostname set to .
Sep 10 23:48:34.136573 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:48:34.136614 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:48:34.136634 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:48:34.136654 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:48:34.136674 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:48:34.136694 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:48:34.136719 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:48:34.136740 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:48:34.136761 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:48:34.136780 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:48:34.136800 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:48:34.136819 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:48:34.136838 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:48:34.136861 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:48:34.136880 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:48:34.136899 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:48:34.136919 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:48:34.136938 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:48:34.136957 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:48:34.136976 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:48:34.136995 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:48:34.137018 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:48:34.137037 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:48:34.137057 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:48:34.137076 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:48:34.137095 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:48:34.137114 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:48:34.137133 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 23:48:34.137153 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:48:34.137171 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:48:34.137194 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:48:34.137214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:48:34.137233 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:48:34.137254 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:48:34.137277 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:48:34.137297 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:48:34.137353 systemd-journald[257]: Collecting audit messages is disabled.
Sep 10 23:48:34.137396 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:48:34.137421 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:48:34.137455 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:48:34.137479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:48:34.137498 kernel: Bridge firewalling registered
Sep 10 23:48:34.137517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:48:34.137537 systemd-journald[257]: Journal started
Sep 10 23:48:34.137592 systemd-journald[257]: Runtime Journal (/run/log/journal/ec21f416dce29f348abfd1d8d2725251) is 8M, max 75.3M, 67.3M free.
Sep 10 23:48:34.071996 systemd-modules-load[258]: Inserted module 'overlay'
Sep 10 23:48:34.127296 systemd-modules-load[258]: Inserted module 'br_netfilter'
Sep 10 23:48:34.146605 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:48:34.154095 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:48:34.165548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:48:34.169770 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:48:34.191840 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:48:34.207103 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:48:34.216366 systemd-tmpfiles[282]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 23:48:34.222809 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:48:34.233657 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:48:34.241211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:48:34.252228 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:48:34.282266 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:48:34.353485 systemd-resolved[299]: Positive Trust Anchors:
Sep 10 23:48:34.353524 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:48:34.353772 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:48:34.453622 kernel: SCSI subsystem initialized
Sep 10 23:48:34.461620 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:48:34.474620 kernel: iscsi: registered transport (tcp)
Sep 10 23:48:34.495758 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:48:34.495831 kernel: QLogic iSCSI HBA Driver
Sep 10 23:48:34.528752 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:48:34.559338 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:48:34.570071 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:48:34.629864 kernel: random: crng init done
Sep 10 23:48:34.630423 systemd-resolved[299]: Defaulting to hostname 'linux'.
Sep 10 23:48:34.634256 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:48:34.642250 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:48:34.662660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:48:34.667058 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:48:34.757642 kernel: raid6: neonx8 gen() 6632 MB/s Sep 10 23:48:34.774614 kernel: raid6: neonx4 gen() 6580 MB/s Sep 10 23:48:34.791613 kernel: raid6: neonx2 gen() 5469 MB/s Sep 10 23:48:34.808614 kernel: raid6: neonx1 gen() 3960 MB/s Sep 10 23:48:34.825613 kernel: raid6: int64x8 gen() 3660 MB/s Sep 10 23:48:34.842614 kernel: raid6: int64x4 gen() 3723 MB/s Sep 10 23:48:34.859616 kernel: raid6: int64x2 gen() 3609 MB/s Sep 10 23:48:34.877565 kernel: raid6: int64x1 gen() 2758 MB/s Sep 10 23:48:34.877623 kernel: raid6: using algorithm neonx8 gen() 6632 MB/s Sep 10 23:48:34.895615 kernel: raid6: .... xor() 4614 MB/s, rmw enabled Sep 10 23:48:34.895652 kernel: raid6: using neon recovery algorithm Sep 10 23:48:34.904222 kernel: xor: measuring software checksum speed Sep 10 23:48:34.904273 kernel: 8regs : 12943 MB/sec Sep 10 23:48:34.905377 kernel: 32regs : 13043 MB/sec Sep 10 23:48:34.907707 kernel: arm64_neon : 8623 MB/sec Sep 10 23:48:34.907740 kernel: xor: using function: 32regs (13043 MB/sec) Sep 10 23:48:35.000656 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 10 23:48:35.011398 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 10 23:48:35.015786 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:48:35.081241 systemd-udevd[507]: Using default interface naming scheme 'v255'. Sep 10 23:48:35.091240 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:48:35.096176 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 10 23:48:35.141353 dracut-pre-trigger[513]: rd.md=0: removing MD RAID activation Sep 10 23:48:35.184988 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 23:48:35.189514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 23:48:35.331777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 10 23:48:35.343544 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 10 23:48:35.469500 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 10 23:48:35.469626 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 10 23:48:35.478045 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 10 23:48:35.478574 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 10 23:48:35.499871 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:2e:7c:da:94:43 Sep 10 23:48:35.516757 (udev-worker)[555]: Network interface NamePolicy= disabled on kernel command line. Sep 10 23:48:35.522029 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 10 23:48:35.525220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 23:48:35.530289 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:48:35.537629 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 10 23:48:35.541390 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:48:35.546895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:48:35.552367 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 10 23:48:35.563634 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 10 23:48:35.570736 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 23:48:35.570784 kernel: GPT:9289727 != 16777215 Sep 10 23:48:35.570810 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 23:48:35.572733 kernel: GPT:9289727 != 16777215 Sep 10 23:48:35.574147 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 23:48:35.575670 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 10 23:48:35.602689 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 10 23:48:35.616622 kernel: nvme nvme0: using unchecked data buffer Sep 10 23:48:35.778873 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 10 23:48:35.803453 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 10 23:48:35.810043 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 10 23:48:35.835545 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 10 23:48:35.870191 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 10 23:48:35.875999 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 10 23:48:35.886206 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 23:48:35.894930 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:48:35.897847 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 23:48:35.906788 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 10 23:48:35.916611 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 10 23:48:35.939993 disk-uuid[687]: Primary Header is updated. Sep 10 23:48:35.939993 disk-uuid[687]: Secondary Entries is updated. Sep 10 23:48:35.939993 disk-uuid[687]: Secondary Header is updated. Sep 10 23:48:35.958728 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 10 23:48:35.963048 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 10 23:48:35.972618 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 10 23:48:36.979867 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 10 23:48:36.982760 disk-uuid[689]: The operation has completed successfully. Sep 10 23:48:37.163023 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 10 23:48:37.165436 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 10 23:48:37.263407 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 10 23:48:37.283874 sh[956]: Success Sep 10 23:48:37.312672 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 10 23:48:37.312748 kernel: device-mapper: uevent: version 1.0.3 Sep 10 23:48:37.314617 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 10 23:48:37.326677 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 10 23:48:37.430740 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 10 23:48:37.438431 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 10 23:48:37.460706 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 10 23:48:37.485622 kernel: BTRFS: device fsid 3b17f37f-d395-4116-a46d-e07f86112ade devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (979) Sep 10 23:48:37.489164 kernel: BTRFS info (device dm-0): first mount of filesystem 3b17f37f-d395-4116-a46d-e07f86112ade Sep 10 23:48:37.489207 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:48:37.537494 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 10 23:48:37.537559 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 10 23:48:37.538815 kernel: BTRFS info (device dm-0): enabling free space tree Sep 10 23:48:37.551870 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 10 23:48:37.556019 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 10 23:48:37.560868 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Sep 10 23:48:37.566292 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 10 23:48:37.575811 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 10 23:48:37.630636 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1015) Sep 10 23:48:37.635955 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:48:37.636027 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:48:37.646409 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 10 23:48:37.646482 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 10 23:48:37.655680 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:48:37.660424 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 10 23:48:37.667286 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 10 23:48:37.759706 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 23:48:37.766571 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 23:48:37.837087 systemd-networkd[1148]: lo: Link UP Sep 10 23:48:37.837100 systemd-networkd[1148]: lo: Gained carrier Sep 10 23:48:37.843115 systemd-networkd[1148]: Enumeration completed Sep 10 23:48:37.843426 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:48:37.847400 systemd[1]: Reached target network.target - Network. Sep 10 23:48:37.853808 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:48:37.853827 systemd-networkd[1148]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 10 23:48:37.864130 systemd-networkd[1148]: eth0: Link UP Sep 10 23:48:37.864151 systemd-networkd[1148]: eth0: Gained carrier Sep 10 23:48:37.864174 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:48:37.900664 systemd-networkd[1148]: eth0: DHCPv4 address 172.31.28.68/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 10 23:48:38.188116 ignition[1075]: Ignition 2.21.0 Sep 10 23:48:38.188646 ignition[1075]: Stage: fetch-offline Sep 10 23:48:38.189548 ignition[1075]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:38.189570 ignition[1075]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:38.191334 ignition[1075]: Ignition finished successfully Sep 10 23:48:38.200028 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 23:48:38.205768 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 10 23:48:38.260126 ignition[1160]: Ignition 2.21.0 Sep 10 23:48:38.260159 ignition[1160]: Stage: fetch Sep 10 23:48:38.261841 ignition[1160]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:38.261983 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:38.263315 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:38.281103 ignition[1160]: PUT result: OK Sep 10 23:48:38.289471 ignition[1160]: parsed url from cmdline: "" Sep 10 23:48:38.289495 ignition[1160]: no config URL provided Sep 10 23:48:38.289512 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 23:48:38.289563 ignition[1160]: no config at "/usr/lib/ignition/user.ign" Sep 10 23:48:38.289629 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:38.295836 ignition[1160]: PUT result: OK Sep 10 23:48:38.295926 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 10 23:48:38.298488 
ignition[1160]: GET result: OK Sep 10 23:48:38.298679 ignition[1160]: parsing config with SHA512: dfb97c0b9be2ca23957e51cdf175074f0e6d45f508cbaf791242902fb6579df2b94ac7f11c4822d0b6549f51c6cbcb8712cc14162ff5d3d54b99177b230dc23c Sep 10 23:48:38.310267 unknown[1160]: fetched base config from "system" Sep 10 23:48:38.310295 unknown[1160]: fetched base config from "system" Sep 10 23:48:38.311050 ignition[1160]: fetch: fetch complete Sep 10 23:48:38.310308 unknown[1160]: fetched user config from "aws" Sep 10 23:48:38.311062 ignition[1160]: fetch: fetch passed Sep 10 23:48:38.311155 ignition[1160]: Ignition finished successfully Sep 10 23:48:38.322849 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 10 23:48:38.331972 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 10 23:48:38.374937 ignition[1167]: Ignition 2.21.0 Sep 10 23:48:38.375467 ignition[1167]: Stage: kargs Sep 10 23:48:38.376053 ignition[1167]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:38.376075 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:38.376239 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:38.385161 ignition[1167]: PUT result: OK Sep 10 23:48:38.389953 ignition[1167]: kargs: kargs passed Sep 10 23:48:38.390641 ignition[1167]: Ignition finished successfully Sep 10 23:48:38.396668 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 10 23:48:38.403137 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 10 23:48:38.438931 ignition[1173]: Ignition 2.21.0 Sep 10 23:48:38.438962 ignition[1173]: Stage: disks Sep 10 23:48:38.439489 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:38.439513 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:38.439693 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:38.442676 ignition[1173]: PUT result: OK Sep 10 23:48:38.455306 ignition[1173]: disks: disks passed Sep 10 23:48:38.455396 ignition[1173]: Ignition finished successfully Sep 10 23:48:38.461932 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 10 23:48:38.466887 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 10 23:48:38.469464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 10 23:48:38.477403 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 23:48:38.479651 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:48:38.482041 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:48:38.492570 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 10 23:48:38.552996 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 10 23:48:38.558255 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 10 23:48:38.566546 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 10 23:48:38.696631 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fcae628f-5f9a-4539-a638-93fb1399b5d7 r/w with ordered data mode. Quota mode: none. Sep 10 23:48:38.698284 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 10 23:48:38.702447 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 10 23:48:38.712045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 23:48:38.726093 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Sep 10 23:48:38.732516 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 10 23:48:38.732642 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 23:48:38.732698 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:48:38.758815 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 10 23:48:38.764561 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 10 23:48:38.778612 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Sep 10 23:48:38.778684 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:48:38.781214 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:48:38.788633 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 10 23:48:38.788695 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 10 23:48:38.791132 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 23:48:39.104358 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 23:48:39.115175 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Sep 10 23:48:39.125511 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 23:48:39.133617 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 23:48:39.204755 systemd-networkd[1148]: eth0: Gained IPv6LL Sep 10 23:48:39.361439 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 10 23:48:39.369215 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 23:48:39.374630 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 10 23:48:39.399913 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 23:48:39.403709 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:48:39.436320 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 10 23:48:39.445765 ignition[1314]: INFO : Ignition 2.21.0 Sep 10 23:48:39.447789 ignition[1314]: INFO : Stage: mount Sep 10 23:48:39.447789 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:39.447789 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:39.457568 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:39.460471 ignition[1314]: INFO : PUT result: OK Sep 10 23:48:39.465123 ignition[1314]: INFO : mount: mount passed Sep 10 23:48:39.470087 ignition[1314]: INFO : Ignition finished successfully Sep 10 23:48:39.472488 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 23:48:39.479748 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 10 23:48:39.701427 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 23:48:39.754639 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Sep 10 23:48:39.759141 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:48:39.759249 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:48:39.766463 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 10 23:48:39.766540 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 10 23:48:39.770059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 10 23:48:39.815381 ignition[1343]: INFO : Ignition 2.21.0 Sep 10 23:48:39.815381 ignition[1343]: INFO : Stage: files Sep 10 23:48:39.818990 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:39.818990 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:39.818990 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:39.827034 ignition[1343]: INFO : PUT result: OK Sep 10 23:48:39.832394 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping Sep 10 23:48:39.835831 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 23:48:39.835831 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 23:48:39.844056 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 23:48:39.847331 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 23:48:39.850853 unknown[1343]: wrote ssh authorized keys file for user: core Sep 10 23:48:39.853602 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 23:48:39.861727 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 10 23:48:39.866047 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 10 23:48:39.981494 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 23:48:40.361464 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 10 23:48:40.365731 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Sep 10 23:48:40.365731 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 23:48:40.373195 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 23:48:40.376881 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 23:48:40.380672 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 23:48:40.384570 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 23:48:40.388414 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 23:48:40.392335 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 23:48:40.401560 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 23:48:40.405480 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 23:48:40.409431 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 10 23:48:40.414918 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 10 23:48:40.414918 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file 
"/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 10 23:48:40.414918 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 10 23:48:40.933777 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 10 23:48:42.549170 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 10 23:48:42.554121 ignition[1343]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 10 23:48:42.556989 ignition[1343]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 23:48:42.563309 ignition[1343]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 23:48:42.563309 ignition[1343]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 10 23:48:42.563309 ignition[1343]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 10 23:48:42.573136 ignition[1343]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 23:48:42.576279 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 23:48:42.580125 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 23:48:42.583910 ignition[1343]: INFO : files: files passed Sep 10 23:48:42.583910 ignition[1343]: INFO : Ignition finished successfully Sep 10 23:48:42.592573 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 23:48:42.600796 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Sep 10 23:48:42.603943 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 10 23:48:42.632796 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 23:48:42.633222 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 10 23:48:42.646500 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:48:42.646500 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:48:42.658086 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:48:42.663103 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:48:42.667133 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 10 23:48:42.672136 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 23:48:42.773820 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 23:48:42.775967 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 23:48:42.781970 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 23:48:42.784408 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 23:48:42.788859 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 23:48:42.790299 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 23:48:42.827109 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:48:42.832362 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 23:48:42.871027 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Sep 10 23:48:42.876252 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:48:42.884292 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 23:48:42.888051 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 23:48:42.889975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:48:42.896023 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 23:48:42.900717 systemd[1]: Stopped target basic.target - Basic System. Sep 10 23:48:42.903269 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 23:48:42.907704 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:48:42.912470 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 23:48:42.917232 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 10 23:48:42.921643 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 23:48:42.923959 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 23:48:42.928219 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 23:48:42.933373 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 23:48:42.939772 systemd[1]: Stopped target swap.target - Swaps. Sep 10 23:48:42.941937 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 23:48:42.942610 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 23:48:42.952656 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 23:48:42.957396 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 23:48:42.960595 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 23:48:42.964934 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 10 23:48:42.967987 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 23:48:42.968556 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 23:48:42.975740 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 23:48:42.976000 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:48:42.983140 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 23:48:42.983444 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 23:48:42.992254 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 23:48:43.000607 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 23:48:43.001483 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 23:48:43.020801 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 23:48:43.031290 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 23:48:43.035314 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 23:48:43.037472 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 23:48:43.037716 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 23:48:43.059264 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 23:48:43.059480 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 10 23:48:43.077968 ignition[1397]: INFO : Ignition 2.21.0
Sep 10 23:48:43.080919 ignition[1397]: INFO : Stage: umount
Sep 10 23:48:43.080919 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:48:43.080919 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 10 23:48:43.080919 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 10 23:48:43.091308 ignition[1397]: INFO : PUT result: OK
Sep 10 23:48:43.099807 ignition[1397]: INFO : umount: umount passed
Sep 10 23:48:43.103372 ignition[1397]: INFO : Ignition finished successfully
Sep 10 23:48:43.108248 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 23:48:43.109427 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 23:48:43.109654 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 23:48:43.121202 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 23:48:43.123489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 23:48:43.126972 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 23:48:43.127146 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 23:48:43.131183 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 23:48:43.131293 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 23:48:43.134373 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 10 23:48:43.134458 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 10 23:48:43.137191 systemd[1]: Stopped target network.target - Network.
Sep 10 23:48:43.140523 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 23:48:43.140634 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:48:43.143542 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 23:48:43.147117 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 23:48:43.154947 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:48:43.158431 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 23:48:43.160487 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 23:48:43.166719 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 23:48:43.166799 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:48:43.169286 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 23:48:43.169353 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:48:43.175521 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 23:48:43.175641 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 23:48:43.178033 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 23:48:43.178110 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 23:48:43.181014 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 23:48:43.181096 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 23:48:43.186025 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 23:48:43.188423 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 23:48:43.204707 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 23:48:43.210063 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 23:48:43.228568 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 10 23:48:43.229107 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 23:48:43.229332 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 23:48:43.236953 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 23:48:43.238672 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 10 23:48:43.245435 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 23:48:43.245513 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:48:43.257496 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 23:48:43.270289 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 23:48:43.270407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:48:43.273903 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 23:48:43.273990 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:48:43.294744 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 23:48:43.294846 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:48:43.297924 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 23:48:43.298005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:48:43.306855 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:48:43.314022 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 23:48:43.314169 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:48:43.336262 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 23:48:43.337941 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:48:43.345053 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 23:48:43.346082 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:48:43.350230 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 23:48:43.350310 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:48:43.357093 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 23:48:43.357229 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:48:43.363745 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 23:48:43.363849 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:48:43.373664 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 23:48:43.373787 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:48:43.382671 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 23:48:43.385353 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 10 23:48:43.385470 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:48:43.399502 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 23:48:43.399622 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:48:43.407514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:48:43.407641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:48:43.416911 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 10 23:48:43.417022 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 10 23:48:43.417111 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:48:43.417958 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 23:48:43.425204 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 23:48:43.445753 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 23:48:43.447101 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 23:48:43.454448 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 23:48:43.462226 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 23:48:43.500060 systemd[1]: Switching root.
Sep 10 23:48:43.546675 systemd-journald[257]: Journal stopped
Sep 10 23:48:45.516658 systemd-journald[257]: Received SIGTERM from PID 1 (systemd).
Sep 10 23:48:45.516778 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 23:48:45.516820 kernel: SELinux: policy capability open_perms=1
Sep 10 23:48:45.516850 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 23:48:45.516885 kernel: SELinux: policy capability always_check_network=0
Sep 10 23:48:45.516914 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 23:48:45.516943 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 23:48:45.516972 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 23:48:45.516997 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 23:48:45.517034 kernel: SELinux: policy capability userspace_initial_context=0
Sep 10 23:48:45.517069 kernel: audit: type=1403 audit(1757548123.809:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 23:48:45.517101 systemd[1]: Successfully loaded SELinux policy in 61.521ms.
Sep 10 23:48:45.517151 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.775ms.
Sep 10 23:48:45.517185 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:48:45.517216 systemd[1]: Detected virtualization amazon.
Sep 10 23:48:45.517253 systemd[1]: Detected architecture arm64.
Sep 10 23:48:45.517283 systemd[1]: Detected first boot.
Sep 10 23:48:45.517314 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:48:45.517344 zram_generator::config[1440]: No configuration found.
Sep 10 23:48:45.517373 kernel: NET: Registered PF_VSOCK protocol family
Sep 10 23:48:45.517401 systemd[1]: Populated /etc with preset unit settings.
Sep 10 23:48:45.517435 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 10 23:48:45.517466 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 23:48:45.517497 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 23:48:45.517528 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:48:45.517559 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 23:48:45.517610 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 23:48:45.517645 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 23:48:45.517677 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 23:48:45.517712 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 23:48:45.517741 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 23:48:45.517774 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 23:48:45.517802 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 23:48:45.517831 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:48:45.517877 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:48:45.517911 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 23:48:45.517939 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 23:48:45.517972 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 23:48:45.518006 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:48:45.518036 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 10 23:48:45.518064 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:48:45.518094 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:48:45.518122 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 23:48:45.518149 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 23:48:45.518177 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:48:45.518207 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 23:48:45.518240 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:48:45.518271 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:48:45.518300 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:48:45.518330 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:48:45.529658 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 23:48:45.529718 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 23:48:45.529748 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 10 23:48:45.529777 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:48:45.529806 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:48:45.529847 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:48:45.529897 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 23:48:45.529929 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 23:48:45.529957 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 23:48:45.529986 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 23:48:45.530015 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 23:48:45.530043 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 23:48:45.530071 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 23:48:45.530104 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 23:48:45.530139 systemd[1]: Reached target machines.target - Containers.
Sep 10 23:48:45.530171 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 23:48:45.530200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:48:45.530229 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:48:45.530258 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 23:48:45.530286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:48:45.530314 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:48:45.530342 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:48:45.530374 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 23:48:45.530403 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:48:45.530431 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 23:48:45.530461 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 23:48:45.530491 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 23:48:45.530521 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 23:48:45.530549 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 23:48:45.530597 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:48:45.530638 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:48:45.530669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:48:45.530697 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:48:45.530725 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 23:48:45.530753 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 10 23:48:45.530786 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:48:45.530816 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 23:48:45.530843 systemd[1]: Stopped verity-setup.service.
Sep 10 23:48:45.530875 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 23:48:45.530904 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 23:48:45.530937 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 23:48:45.530969 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 23:48:45.530997 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 23:48:45.531025 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 23:48:45.531053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:48:45.531081 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 23:48:45.531111 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 23:48:45.531137 kernel: loop: module loaded
Sep 10 23:48:45.531165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:48:45.531192 kernel: fuse: init (API version 7.41)
Sep 10 23:48:45.531223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:48:45.531254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:48:45.531284 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:48:45.531312 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 23:48:45.531342 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 23:48:45.531370 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:48:45.531398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:48:45.531426 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 23:48:45.531459 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 23:48:45.531491 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:48:45.531521 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:48:45.531551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:48:45.538634 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:48:45.538716 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 23:48:45.538801 systemd-journald[1523]: Collecting audit messages is disabled.
Sep 10 23:48:45.538861 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 23:48:45.538899 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 23:48:45.538931 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:48:45.538961 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 23:48:45.538992 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:48:45.539022 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 10 23:48:45.539054 systemd-journald[1523]: Journal started
Sep 10 23:48:45.539102 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec21f416dce29f348abfd1d8d2725251) is 8M, max 75.3M, 67.3M free.
Sep 10 23:48:44.866991 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 23:48:44.890173 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 10 23:48:44.891023 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 23:48:45.552237 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 23:48:45.558070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:48:45.567893 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 23:48:45.574620 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:48:45.585630 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 23:48:45.585719 kernel: ACPI: bus type drm_connector registered
Sep 10 23:48:45.590603 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 23:48:45.595644 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:48:45.600192 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:48:45.603381 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:48:45.606438 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 10 23:48:45.661197 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 23:48:45.691453 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 23:48:45.699642 kernel: loop0: detected capacity change from 0 to 107312
Sep 10 23:48:45.701569 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 23:48:45.710405 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 23:48:45.715412 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:48:45.721499 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 23:48:45.730945 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 10 23:48:45.761607 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec21f416dce29f348abfd1d8d2725251 is 113.536ms for 937 entries.
Sep 10 23:48:45.761607 systemd-journald[1523]: System Journal (/var/log/journal/ec21f416dce29f348abfd1d8d2725251) is 8M, max 195.6M, 187.6M free.
Sep 10 23:48:45.884859 systemd-journald[1523]: Received client request to flush runtime journal.
Sep 10 23:48:45.884929 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 23:48:45.885163 kernel: loop1: detected capacity change from 0 to 61240
Sep 10 23:48:45.885205 kernel: loop2: detected capacity change from 0 to 207008
Sep 10 23:48:45.844893 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 10 23:48:45.891125 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 23:48:45.895813 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 23:48:45.917713 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 23:48:45.925639 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:48:45.999156 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:48:46.024189 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Sep 10 23:48:46.024229 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Sep 10 23:48:46.039057 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:48:46.152930 kernel: loop3: detected capacity change from 0 to 138376
Sep 10 23:48:46.218645 kernel: loop4: detected capacity change from 0 to 107312
Sep 10 23:48:46.250848 kernel: loop5: detected capacity change from 0 to 61240
Sep 10 23:48:46.276624 kernel: loop6: detected capacity change from 0 to 207008
Sep 10 23:48:46.319619 kernel: loop7: detected capacity change from 0 to 138376
Sep 10 23:48:46.355075 (sd-merge)[1599]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 10 23:48:46.357673 (sd-merge)[1599]: Merged extensions into '/usr'.
Sep 10 23:48:46.369136 systemd[1]: Reload requested from client PID 1552 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 23:48:46.369303 systemd[1]: Reloading...
Sep 10 23:48:46.499841 ldconfig[1546]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 23:48:46.555615 zram_generator::config[1624]: No configuration found.
Sep 10 23:48:46.784243 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:48:46.975358 systemd[1]: Reloading finished in 604 ms.
Sep 10 23:48:46.996642 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 23:48:47.001636 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 23:48:47.004835 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 23:48:47.018826 systemd[1]: Starting ensure-sysext.service...
Sep 10 23:48:47.028189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:48:47.036469 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:48:47.072081 systemd[1]: Reload requested from client PID 1678 ('systemctl') (unit ensure-sysext.service)...
Sep 10 23:48:47.072111 systemd[1]: Reloading...
Sep 10 23:48:47.121248 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 10 23:48:47.121325 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 10 23:48:47.121990 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 23:48:47.122488 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 23:48:47.124212 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 23:48:47.124827 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Sep 10 23:48:47.124994 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Sep 10 23:48:47.137818 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:48:47.137862 systemd-tmpfiles[1679]: Skipping /boot
Sep 10 23:48:47.173573 systemd-udevd[1680]: Using default interface naming scheme 'v255'.
Sep 10 23:48:47.173683 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:48:47.173696 systemd-tmpfiles[1679]: Skipping /boot
Sep 10 23:48:47.292629 zram_generator::config[1710]: No configuration found.
Sep 10 23:48:47.545460 (udev-worker)[1721]: Network interface NamePolicy= disabled on kernel command line.
Sep 10 23:48:47.679082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:48:47.899483 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 10 23:48:47.899744 systemd[1]: Reloading finished in 826 ms.
Sep 10 23:48:47.959707 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:48:47.970629 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:48:48.650241 systemd[1]: Finished ensure-sysext.service.
Sep 10 23:48:48.709025 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 10 23:48:48.716745 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:48:48.731845 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 23:48:48.738024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:48:48.740063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:48:48.750888 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:48:48.760333 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:48:48.772034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:48:48.778616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:48:48.782136 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 23:48:48.794455 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:48:48.799031 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 23:48:48.815017 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:48:48.833054 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:48:48.850102 systemd[1]: Reached target time-set.target - System Time Set.
Sep 10 23:48:48.871978 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 23:48:48.880026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:48:48.889545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:48:48.891103 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:48:48.898170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:48:48.900824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:48:48.930555 augenrules[1925]: No rules
Sep 10 23:48:48.932913 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:48:48.934508 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:48:48.941624 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 23:48:48.949852 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:48:48.957877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:48:48.966742 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 23:48:48.970370 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:48:48.972884 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:48:48.987502 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:48:48.987703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:48:48.995527 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 23:48:49.006753 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 23:48:49.016732 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 23:48:49.029900 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 23:48:49.041688 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 23:48:49.042961 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 23:48:49.126679 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:48:49.130069 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 23:48:49.257839 systemd-networkd[1914]: lo: Link UP Sep 10 23:48:49.258402 systemd-networkd[1914]: lo: Gained carrier Sep 10 23:48:49.261635 systemd-networkd[1914]: Enumeration completed Sep 10 23:48:49.261814 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:48:49.262759 systemd-resolved[1918]: Positive Trust Anchors: Sep 10 23:48:49.262793 systemd-resolved[1918]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 23:48:49.262857 systemd-resolved[1918]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 23:48:49.263172 systemd-networkd[1914]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:48:49.263180 systemd-networkd[1914]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:48:49.271826 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 23:48:49.277991 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 23:48:49.287159 systemd-networkd[1914]: eth0: Link UP Sep 10 23:48:49.287436 systemd-networkd[1914]: eth0: Gained carrier Sep 10 23:48:49.287473 systemd-networkd[1914]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:48:49.291972 systemd-resolved[1918]: Defaulting to hostname 'linux'. Sep 10 23:48:49.295513 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 23:48:49.298129 systemd[1]: Reached target network.target - Network. Sep 10 23:48:49.300302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:48:49.302873 systemd[1]: Reached target sysinit.target - System Initialization. 
Sep 10 23:48:49.305277 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 23:48:49.308250 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:48:49.311290 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:48:49.314804 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 23:48:49.317647 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:48:49.320389 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 23:48:49.320610 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:48:49.322658 systemd[1]: Reached target timers.target - Timer Units. Sep 10 23:48:49.325699 systemd-networkd[1914]: eth0: DHCPv4 address 172.31.28.68/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 10 23:48:49.326664 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 23:48:49.333972 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 23:48:49.344045 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:48:49.347842 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:48:49.350898 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 23:48:49.362934 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:48:49.366384 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:48:49.372657 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 23:48:49.375995 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Sep 10 23:48:49.379576 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:48:49.382090 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:48:49.384454 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:48:49.384528 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:48:49.388756 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:48:49.394206 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 10 23:48:49.399908 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:48:49.412459 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:48:49.421948 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:48:49.432059 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 23:48:49.436454 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 23:48:49.439930 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 23:48:49.459410 systemd[1]: Started ntpd.service - Network Time Service. Sep 10 23:48:49.473885 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:48:49.481214 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 10 23:48:49.490042 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:48:49.497783 jq[1965]: false Sep 10 23:48:49.501096 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:48:49.510207 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 10 23:48:49.515317 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 23:48:49.521374 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:48:49.524986 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 23:48:49.541640 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 23:48:49.554693 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:48:49.558431 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:48:49.559993 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:48:49.588949 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 23:48:49.590454 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 23:48:49.631803 extend-filesystems[1966]: Found /dev/nvme0n1p6 Sep 10 23:48:49.636341 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:48:49.646133 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:48:49.678432 jq[1981]: true Sep 10 23:48:49.682622 extend-filesystems[1966]: Found /dev/nvme0n1p9 Sep 10 23:48:49.698633 tar[1988]: linux-arm64/LICENSE Sep 10 23:48:49.698633 tar[1988]: linux-arm64/helm Sep 10 23:48:49.702458 extend-filesystems[1966]: Checking size of /dev/nvme0n1p9 Sep 10 23:48:49.733004 (ntainerd)[2003]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:48:49.744169 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 10 23:48:49.747769 update_engine[1978]: I20250910 23:48:49.740967 1978 main.cc:92] Flatcar Update Engine starting Sep 10 23:48:49.743892 dbus-daemon[1963]: [system] SELinux support is enabled Sep 10 23:48:49.755028 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 23:48:49.755081 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 23:48:49.757933 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:48:49.757961 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:48:49.803209 jq[2008]: true Sep 10 23:48:49.818636 extend-filesystems[1966]: Resized partition /dev/nvme0n1p9 Sep 10 23:48:49.822389 dbus-daemon[1963]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1914 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 10 23:48:49.839608 extend-filesystems[2019]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:48:49.834615 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 10 23:48:49.842561 systemd[1]: Started update-engine.service - Update Engine. 
Sep 10 23:48:49.846067 update_engine[1978]: I20250910 23:48:49.843377 1978 update_check_scheduler.cc:74] Next update check in 9m38s Sep 10 23:48:49.848240 ntpd[1968]: ntpd 4.2.8p17@1.4004-o Wed Sep 10 21:39:18 UTC 2025 (1): Starting Sep 10 23:48:49.850696 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: ntpd 4.2.8p17@1.4004-o Wed Sep 10 21:39:18 UTC 2025 (1): Starting Sep 10 23:48:49.850696 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 10 23:48:49.850696 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: ---------------------------------------------------- Sep 10 23:48:49.850696 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: ntp-4 is maintained by Network Time Foundation, Sep 10 23:48:49.850696 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 10 23:48:49.850696 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: corporation. Support and training for ntp-4 are Sep 10 23:48:49.850696 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: available at https://www.nwtime.org/support Sep 10 23:48:49.850696 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: ---------------------------------------------------- Sep 10 23:48:49.848304 ntpd[1968]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 10 23:48:49.848322 ntpd[1968]: ---------------------------------------------------- Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: proto: precision = 0.096 usec (-23) Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: basedate set to 2025-08-29 Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: gps base set to 2025-08-31 (week 2382) Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: Listen and drop on 0 v6wildcard [::]:123 Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: Listen normally on 2 lo 127.0.0.1:123 Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: Listen normally on 
3 eth0 172.31.28.68:123 Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: Listen normally on 4 lo [::1]:123 Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: bind(21) AF_INET6 fe80::42e:7cff:feda:9443%2#123 flags 0x11 failed: Cannot assign requested address Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: unable to create socket on eth0 (5) for fe80::42e:7cff:feda:9443%2#123 Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: failed to init interface for address fe80::42e:7cff:feda:9443%2 Sep 10 23:48:49.859752 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: Listening on routing socket on fd #21 for interface updates Sep 10 23:48:49.848339 ntpd[1968]: ntp-4 is maintained by Network Time Foundation, Sep 10 23:48:49.848355 ntpd[1968]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 10 23:48:49.848370 ntpd[1968]: corporation. Support and training for ntp-4 are Sep 10 23:48:49.848385 ntpd[1968]: available at https://www.nwtime.org/support Sep 10 23:48:49.848401 ntpd[1968]: ---------------------------------------------------- Sep 10 23:48:49.855042 ntpd[1968]: proto: precision = 0.096 usec (-23) Sep 10 23:48:49.855971 ntpd[1968]: basedate set to 2025-08-29 Sep 10 23:48:49.856003 ntpd[1968]: gps base set to 2025-08-31 (week 2382) Sep 10 23:48:49.880838 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 10 23:48:49.880838 ntpd[1968]: 10 Sep 23:48:49 ntpd[1968]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 10 23:48:49.858537 ntpd[1968]: Listen and drop on 0 v6wildcard [::]:123 Sep 10 23:48:49.858646 ntpd[1968]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 10 23:48:49.858912 ntpd[1968]: Listen normally on 2 lo 127.0.0.1:123 Sep 10 23:48:49.858974 ntpd[1968]: Listen normally on 3 eth0 172.31.28.68:123 Sep 10 23:48:49.859036 ntpd[1968]: Listen normally on 4 lo [::1]:123 Sep 10 23:48:49.859105 ntpd[1968]: bind(21) AF_INET6 fe80::42e:7cff:feda:9443%2#123 flags 0x11 
failed: Cannot assign requested address Sep 10 23:48:49.859140 ntpd[1968]: unable to create socket on eth0 (5) for fe80::42e:7cff:feda:9443%2#123 Sep 10 23:48:49.859164 ntpd[1968]: failed to init interface for address fe80::42e:7cff:feda:9443%2 Sep 10 23:48:49.859210 ntpd[1968]: Listening on routing socket on fd #21 for interface updates Sep 10 23:48:49.867059 ntpd[1968]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 10 23:48:49.867104 ntpd[1968]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 10 23:48:49.884612 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 10 23:48:49.893842 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:48:49.897111 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.907 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.911 INFO Fetch successful Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.911 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.911 INFO Fetch successful Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.912 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.912 INFO Fetch successful Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.912 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.913 INFO Fetch successful Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.913 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 
Sep 10 23:48:49.914691 coreos-metadata[1962]: Sep 10 23:48:49.914 INFO Fetch failed with 404: resource not found Sep 10 23:48:49.915806 coreos-metadata[1962]: Sep 10 23:48:49.914 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 10 23:48:49.918434 coreos-metadata[1962]: Sep 10 23:48:49.916 INFO Fetch successful Sep 10 23:48:49.918434 coreos-metadata[1962]: Sep 10 23:48:49.916 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 10 23:48:49.918434 coreos-metadata[1962]: Sep 10 23:48:49.917 INFO Fetch successful Sep 10 23:48:49.918434 coreos-metadata[1962]: Sep 10 23:48:49.917 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 10 23:48:49.918434 coreos-metadata[1962]: Sep 10 23:48:49.917 INFO Fetch successful Sep 10 23:48:49.918434 coreos-metadata[1962]: Sep 10 23:48:49.917 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 10 23:48:49.920795 coreos-metadata[1962]: Sep 10 23:48:49.920 INFO Fetch successful Sep 10 23:48:49.920885 coreos-metadata[1962]: Sep 10 23:48:49.920 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 10 23:48:49.923665 coreos-metadata[1962]: Sep 10 23:48:49.921 INFO Fetch successful Sep 10 23:48:49.996641 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 10 23:48:50.012765 extend-filesystems[2019]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 10 23:48:50.012765 extend-filesystems[2019]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:48:50.012765 extend-filesystems[2019]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 10 23:48:50.027877 extend-filesystems[1966]: Resized filesystem in /dev/nvme0n1p9 Sep 10 23:48:50.032291 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Sep 10 23:48:50.035555 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:48:50.088083 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:48:50.097709 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 10 23:48:50.101969 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 23:48:50.102413 bash[2052]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:48:50.113019 systemd-logind[1975]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:48:50.113062 systemd-logind[1975]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 10 23:48:50.114101 systemd-logind[1975]: New seat seat0. Sep 10 23:48:50.116144 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 23:48:50.123721 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:48:50.135885 systemd[1]: Starting sshkeys.service... Sep 10 23:48:50.329700 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 10 23:48:50.344031 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 10 23:48:50.640263 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 10 23:48:50.648744 dbus-daemon[1963]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 10 23:48:50.649744 dbus-daemon[1963]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2020 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 10 23:48:50.662674 systemd[1]: Starting polkit.service - Authorization Manager... 
Sep 10 23:48:50.707912 containerd[2003]: time="2025-09-10T23:48:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 23:48:50.713096 coreos-metadata[2090]: Sep 10 23:48:50.711 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 10 23:48:50.714781 coreos-metadata[2090]: Sep 10 23:48:50.713 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 10 23:48:50.717398 coreos-metadata[2090]: Sep 10 23:48:50.716 INFO Fetch successful Sep 10 23:48:50.717398 coreos-metadata[2090]: Sep 10 23:48:50.716 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 10 23:48:50.723652 coreos-metadata[2090]: Sep 10 23:48:50.722 INFO Fetch successful Sep 10 23:48:50.724122 containerd[2003]: time="2025-09-10T23:48:50.723886849Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 10 23:48:50.728987 unknown[2090]: wrote ssh authorized keys file for user: core Sep 10 23:48:50.792818 update-ssh-keys[2151]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:48:50.795547 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 10 23:48:50.818656 systemd[1]: Finished sshkeys.service. 
Sep 10 23:48:50.855391 ntpd[1968]: bind(24) AF_INET6 fe80::42e:7cff:feda:9443%2#123 flags 0x11 failed: Cannot assign requested address Sep 10 23:48:50.856061 ntpd[1968]: 10 Sep 23:48:50 ntpd[1968]: bind(24) AF_INET6 fe80::42e:7cff:feda:9443%2#123 flags 0x11 failed: Cannot assign requested address Sep 10 23:48:50.856061 ntpd[1968]: 10 Sep 23:48:50 ntpd[1968]: unable to create socket on eth0 (6) for fe80::42e:7cff:feda:9443%2#123 Sep 10 23:48:50.856061 ntpd[1968]: 10 Sep 23:48:50 ntpd[1968]: failed to init interface for address fe80::42e:7cff:feda:9443%2 Sep 10 23:48:50.855449 ntpd[1968]: unable to create socket on eth0 (6) for fe80::42e:7cff:feda:9443%2#123 Sep 10 23:48:50.855481 ntpd[1968]: failed to init interface for address fe80::42e:7cff:feda:9443%2 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.857304698Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.06µs" Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.865361870Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.865423766Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.865743542Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.865778954Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.865847978Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.865974914Z" level=info msg="skip loading plugin" 
error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.866000462Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.866362790Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.866402402Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.866438366Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:48:50.866474 containerd[2003]: time="2025-09-10T23:48:50.866461562Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 23:48:50.867046 containerd[2003]: time="2025-09-10T23:48:50.866670098Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 23:48:50.867318 containerd[2003]: time="2025-09-10T23:48:50.867127694Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:48:50.867318 containerd[2003]: time="2025-09-10T23:48:50.867205838Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:48:50.867318 containerd[2003]: time="2025-09-10T23:48:50.867232430Z" 
level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 23:48:50.867318 containerd[2003]: time="2025-09-10T23:48:50.867308582Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 23:48:50.883728 locksmithd[2023]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:48:50.884905 containerd[2003]: time="2025-09-10T23:48:50.884830910Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 23:48:50.885063 containerd[2003]: time="2025-09-10T23:48:50.885017462Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:48:50.895510 containerd[2003]: time="2025-09-10T23:48:50.895385798Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 23:48:50.895510 containerd[2003]: time="2025-09-10T23:48:50.895472642Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.895511738Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.895545146Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.895614854Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.895654886Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.895684178Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 23:48:50.896742 
containerd[2003]: time="2025-09-10T23:48:50.895712582Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.895743026Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.895770062Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.895795298Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.895826726Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.896051318Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.896088122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.896123966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 23:48:50.896742 containerd[2003]: time="2025-09-10T23:48:50.896151374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 23:48:50.897259 containerd[2003]: time="2025-09-10T23:48:50.896178338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 23:48:50.897259 containerd[2003]: time="2025-09-10T23:48:50.896206982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 23:48:50.897259 containerd[2003]: 
time="2025-09-10T23:48:50.896235794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 10 23:48:50.897259 containerd[2003]: time="2025-09-10T23:48:50.896261342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 10 23:48:50.897259 containerd[2003]: time="2025-09-10T23:48:50.896288138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 10 23:48:50.897259 containerd[2003]: time="2025-09-10T23:48:50.896314490Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 10 23:48:50.897259 containerd[2003]: time="2025-09-10T23:48:50.896349818Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 10 23:48:50.897259 containerd[2003]: time="2025-09-10T23:48:50.896741594Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 10 23:48:50.897259 containerd[2003]: time="2025-09-10T23:48:50.896779610Z" level=info msg="Start snapshots syncer"
Sep 10 23:48:50.897259 containerd[2003]: time="2025-09-10T23:48:50.896823722Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 10 23:48:50.899988 containerd[2003]: time="2025-09-10T23:48:50.897198494Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 10 23:48:50.899988 containerd[2003]: time="2025-09-10T23:48:50.897284162Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 10 23:48:50.905375 containerd[2003]: time="2025-09-10T23:48:50.905304638Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 10 23:48:50.909624 containerd[2003]: time="2025-09-10T23:48:50.906944870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 10 23:48:50.909624 containerd[2003]: time="2025-09-10T23:48:50.907069634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 10 23:48:50.909624 containerd[2003]: time="2025-09-10T23:48:50.907165730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 10 23:48:50.909624 containerd[2003]: time="2025-09-10T23:48:50.907208354Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 10 23:48:50.909624 containerd[2003]: time="2025-09-10T23:48:50.907238030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 10 23:48:50.909624 containerd[2003]: time="2025-09-10T23:48:50.907265714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 10 23:48:50.909624 containerd[2003]: time="2025-09-10T23:48:50.907292966Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 10 23:48:50.911648 containerd[2003]: time="2025-09-10T23:48:50.911166350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 10 23:48:50.911648 containerd[2003]: time="2025-09-10T23:48:50.911224466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 10 23:48:50.911648 containerd[2003]: time="2025-09-10T23:48:50.911269502Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 10 23:48:50.911648 containerd[2003]: time="2025-09-10T23:48:50.911370122Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 10 23:48:50.911648 containerd[2003]: time="2025-09-10T23:48:50.911406278Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 10 23:48:50.911648 containerd[2003]: time="2025-09-10T23:48:50.911428730Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 10 23:48:50.911648 containerd[2003]: time="2025-09-10T23:48:50.911459270Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 10 23:48:50.911648 containerd[2003]: time="2025-09-10T23:48:50.911481242Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 10 23:48:50.911648 containerd[2003]: time="2025-09-10T23:48:50.911506622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 10 23:48:50.911648 containerd[2003]: time="2025-09-10T23:48:50.911532554Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 10 23:48:50.916969 containerd[2003]: time="2025-09-10T23:48:50.911715506Z" level=info msg="runtime interface created"
Sep 10 23:48:50.916969 containerd[2003]: time="2025-09-10T23:48:50.911733266Z" level=info msg="created NRI interface"
Sep 10 23:48:50.916969 containerd[2003]: time="2025-09-10T23:48:50.911755250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 10 23:48:50.916969 containerd[2003]: time="2025-09-10T23:48:50.911785754Z" level=info msg="Connect containerd service"
Sep 10 23:48:50.916969 containerd[2003]: time="2025-09-10T23:48:50.911866886Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 10 23:48:50.918350 containerd[2003]: time="2025-09-10T23:48:50.917282390Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 23:48:51.044766 systemd-networkd[1914]: eth0: Gained IPv6LL
Sep 10 23:48:51.057329 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 10 23:48:51.061059 systemd[1]: Reached target network-online.target - Network is Online.
Sep 10 23:48:51.067095 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 10 23:48:51.078976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:48:51.090536 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 10 23:48:51.225275 polkitd[2146]: Started polkitd version 126
Sep 10 23:48:51.233973 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 10 23:48:51.274002 polkitd[2146]: Loading rules from directory /etc/polkit-1/rules.d
Sep 10 23:48:51.274659 polkitd[2146]: Loading rules from directory /run/polkit-1/rules.d
Sep 10 23:48:51.276418 polkitd[2146]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 10 23:48:51.279395 polkitd[2146]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 10 23:48:51.279674 polkitd[2146]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 10 23:48:51.279889 polkitd[2146]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 10 23:48:51.283042 polkitd[2146]: Finished loading, compiling and executing 2 rules
Sep 10 23:48:51.283931 systemd[1]: Started polkit.service - Authorization Manager.
Sep 10 23:48:51.292973 dbus-daemon[1963]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 10 23:48:51.293986 polkitd[2146]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 10 23:48:51.348286 amazon-ssm-agent[2173]: Initializing new seelog logger
Sep 10 23:48:51.350767 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete
Sep 10 23:48:51.350767 amazon-ssm-agent[2173]: 2025/09/10 23:48:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 10 23:48:51.350767 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 10 23:48:51.350767 amazon-ssm-agent[2173]: 2025/09/10 23:48:51 processing appconfig overrides
Sep 10 23:48:51.353123 amazon-ssm-agent[2173]: 2025/09/10 23:48:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 10 23:48:51.353123 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 10 23:48:51.353293 amazon-ssm-agent[2173]: 2025/09/10 23:48:51 processing appconfig overrides
Sep 10 23:48:51.353477 amazon-ssm-agent[2173]: 2025/09/10 23:48:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 10 23:48:51.353477 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 10 23:48:51.353611 amazon-ssm-agent[2173]: 2025/09/10 23:48:51 processing appconfig overrides
Sep 10 23:48:51.359516 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.3529 INFO Proxy environment variables:
Sep 10 23:48:51.362421 amazon-ssm-agent[2173]: 2025/09/10 23:48:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 10 23:48:51.362421 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 10 23:48:51.362643 amazon-ssm-agent[2173]: 2025/09/10 23:48:51 processing appconfig overrides
Sep 10 23:48:51.363514 systemd-hostnamed[2020]: Hostname set to (transient)
Sep 10 23:48:51.365299 systemd-resolved[1918]: System hostname changed to 'ip-172-31-28-68'.
Sep 10 23:48:51.367625 containerd[2003]: time="2025-09-10T23:48:51.367445664Z" level=info msg="Start subscribing containerd event"
Sep 10 23:48:51.367625 containerd[2003]: time="2025-09-10T23:48:51.367549284Z" level=info msg="Start recovering state"
Sep 10 23:48:51.370073 containerd[2003]: time="2025-09-10T23:48:51.367753404Z" level=info msg="Start event monitor"
Sep 10 23:48:51.370177 containerd[2003]: time="2025-09-10T23:48:51.370082400Z" level=info msg="Start cni network conf syncer for default"
Sep 10 23:48:51.370177 containerd[2003]: time="2025-09-10T23:48:51.370114392Z" level=info msg="Start streaming server"
Sep 10 23:48:51.370177 containerd[2003]: time="2025-09-10T23:48:51.370134384Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 10 23:48:51.370177 containerd[2003]: time="2025-09-10T23:48:51.370155048Z" level=info msg="runtime interface starting up..."
Sep 10 23:48:51.370177 containerd[2003]: time="2025-09-10T23:48:51.370170144Z" level=info msg="starting plugins..."
Sep 10 23:48:51.370398 containerd[2003]: time="2025-09-10T23:48:51.370204992Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 10 23:48:51.370398 containerd[2003]: time="2025-09-10T23:48:51.370022436Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 10 23:48:51.370476 containerd[2003]: time="2025-09-10T23:48:51.370444176Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 10 23:48:51.375235 containerd[2003]: time="2025-09-10T23:48:51.370550664Z" level=info msg="containerd successfully booted in 0.667255s"
Sep 10 23:48:51.371508 systemd[1]: Started containerd.service - containerd container runtime.
Sep 10 23:48:51.457611 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.3530 INFO https_proxy:
Sep 10 23:48:51.460708 sshd_keygen[1998]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 10 23:48:51.518427 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 10 23:48:51.526180 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 10 23:48:51.533164 systemd[1]: Started sshd@0-172.31.28.68:22-139.178.68.195:41978.service - OpenSSH per-connection server daemon (139.178.68.195:41978).
Sep 10 23:48:51.560887 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.3530 INFO http_proxy:
Sep 10 23:48:51.592540 systemd[1]: issuegen.service: Deactivated successfully.
Sep 10 23:48:51.594489 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 10 23:48:51.603780 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 10 23:48:51.661684 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.3530 INFO no_proxy:
Sep 10 23:48:51.677202 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 10 23:48:51.685100 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 10 23:48:51.693698 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 10 23:48:51.696619 systemd[1]: Reached target getty.target - Login Prompts.
Sep 10 23:48:51.759900 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.3532 INFO Checking if agent identity type OnPrem can be assumed
Sep 10 23:48:51.845699 sshd[2213]: Accepted publickey for core from 139.178.68.195 port 41978 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:48:51.852704 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:48:51.859484 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.3533 INFO Checking if agent identity type EC2 can be assumed
Sep 10 23:48:51.872256 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 10 23:48:51.877427 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 10 23:48:51.908909 systemd-logind[1975]: New session 1 of user core.
Sep 10 23:48:51.927728 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 10 23:48:51.942253 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 10 23:48:51.959748 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.5733 INFO Agent will take identity from EC2
Sep 10 23:48:51.967736 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 10 23:48:51.976661 systemd-logind[1975]: New session c1 of user core.
Sep 10 23:48:52.003623 tar[1988]: linux-arm64/README.md
Sep 10 23:48:52.039503 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 10 23:48:52.061623 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.5810 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Sep 10 23:48:52.158747 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.5810 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Sep 10 23:48:52.259693 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.5810 INFO [amazon-ssm-agent] Starting Core Agent
Sep 10 23:48:52.329751 systemd[2226]: Queued start job for default target default.target.
Sep 10 23:48:52.335475 systemd[2226]: Created slice app.slice - User Application Slice.
Sep 10 23:48:52.335535 systemd[2226]: Reached target paths.target - Paths.
Sep 10 23:48:52.336207 systemd[2226]: Reached target timers.target - Timers.
Sep 10 23:48:52.338966 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 10 23:48:52.360697 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.5810 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Sep 10 23:48:52.366960 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 10 23:48:52.367084 systemd[2226]: Reached target sockets.target - Sockets.
Sep 10 23:48:52.367165 systemd[2226]: Reached target basic.target - Basic System.
Sep 10 23:48:52.367243 systemd[2226]: Reached target default.target - Main User Target.
Sep 10 23:48:52.367301 systemd[2226]: Startup finished in 371ms.
Sep 10 23:48:52.367893 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 10 23:48:52.380025 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 10 23:48:52.460298 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.5810 INFO [Registrar] Starting registrar module
Sep 10 23:48:52.549335 systemd[1]: Started sshd@1-172.31.28.68:22-139.178.68.195:33176.service - OpenSSH per-connection server daemon (139.178.68.195:33176).
Sep 10 23:48:52.561498 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.5851 INFO [EC2Identity] Checking disk for registration info
Sep 10 23:48:52.660679 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.5852 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Sep 10 23:48:52.761050 amazon-ssm-agent[2173]: 2025-09-10 23:48:51.5852 INFO [EC2Identity] Generating registration keypair
Sep 10 23:48:52.772223 sshd[2240]: Accepted publickey for core from 139.178.68.195 port 33176 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:48:52.774362 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:48:52.788158 systemd-logind[1975]: New session 2 of user core.
Sep 10 23:48:52.792876 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 10 23:48:52.922615 sshd[2242]: Connection closed by 139.178.68.195 port 33176
Sep 10 23:48:52.922817 sshd-session[2240]: pam_unix(sshd:session): session closed for user core
Sep 10 23:48:52.930073 systemd[1]: sshd@1-172.31.28.68:22-139.178.68.195:33176.service: Deactivated successfully.
Sep 10 23:48:52.937951 systemd[1]: session-2.scope: Deactivated successfully.
Sep 10 23:48:52.943490 systemd-logind[1975]: Session 2 logged out. Waiting for processes to exit.
Sep 10 23:48:52.965213 systemd[1]: Started sshd@2-172.31.28.68:22-139.178.68.195:33190.service - OpenSSH per-connection server daemon (139.178.68.195:33190).
Sep 10 23:48:52.973350 systemd-logind[1975]: Removed session 2.
Sep 10 23:48:53.190434 sshd[2248]: Accepted publickey for core from 139.178.68.195 port 33190 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:48:53.193432 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:48:53.209863 systemd-logind[1975]: New session 3 of user core.
Sep 10 23:48:53.213880 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 10 23:48:53.295603 amazon-ssm-agent[2173]: 2025-09-10 23:48:53.2953 INFO [EC2Identity] Checking write access before registering
Sep 10 23:48:53.339331 amazon-ssm-agent[2173]: 2025/09/10 23:48:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 10 23:48:53.339331 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 10 23:48:53.339331 amazon-ssm-agent[2173]: 2025/09/10 23:48:53 processing appconfig overrides
Sep 10 23:48:53.347629 sshd[2250]: Connection closed by 139.178.68.195 port 33190
Sep 10 23:48:53.344977 sshd-session[2248]: pam_unix(sshd:session): session closed for user core
Sep 10 23:48:53.355505 systemd[1]: sshd@2-172.31.28.68:22-139.178.68.195:33190.service: Deactivated successfully.
Sep 10 23:48:53.360157 systemd[1]: session-3.scope: Deactivated successfully.
Sep 10 23:48:53.362293 systemd-logind[1975]: Session 3 logged out. Waiting for processes to exit.
Sep 10 23:48:53.365675 systemd-logind[1975]: Removed session 3.
Sep 10 23:48:53.370611 amazon-ssm-agent[2173]: 2025-09-10 23:48:53.2962 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Sep 10 23:48:53.370611 amazon-ssm-agent[2173]: 2025-09-10 23:48:53.3382 INFO [EC2Identity] EC2 registration was successful.
Sep 10 23:48:53.370611 amazon-ssm-agent[2173]: 2025-09-10 23:48:53.3382 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Sep 10 23:48:53.370611 amazon-ssm-agent[2173]: 2025-09-10 23:48:53.3383 INFO [CredentialRefresher] credentialRefresher has started
Sep 10 23:48:53.370611 amazon-ssm-agent[2173]: 2025-09-10 23:48:53.3383 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 10 23:48:53.370611 amazon-ssm-agent[2173]: 2025-09-10 23:48:53.3699 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 10 23:48:53.370611 amazon-ssm-agent[2173]: 2025-09-10 23:48:53.3702 INFO [CredentialRefresher] Credentials ready
Sep 10 23:48:53.396180 amazon-ssm-agent[2173]: 2025-09-10 23:48:53.3704 INFO [CredentialRefresher] Next credential rotation will be in 29.9999921012 minutes
Sep 10 23:48:53.849425 ntpd[1968]: Listen normally on 7 eth0 [fe80::42e:7cff:feda:9443%2]:123
Sep 10 23:48:53.850437 ntpd[1968]: 10 Sep 23:48:53 ntpd[1968]: Listen normally on 7 eth0 [fe80::42e:7cff:feda:9443%2]:123
Sep 10 23:48:54.212061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:48:54.216097 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 10 23:48:54.222335 systemd[1]: Startup finished in 3.661s (kernel) + 10.130s (initrd) + 10.472s (userspace) = 24.264s.
Sep 10 23:48:54.232240 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:48:54.397746 amazon-ssm-agent[2173]: 2025-09-10 23:48:54.3976 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 10 23:48:54.550544 amazon-ssm-agent[2173]: 2025-09-10 23:48:54.4036 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2267) started
Sep 10 23:48:54.651152 amazon-ssm-agent[2173]: 2025-09-10 23:48:54.4037 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 10 23:48:55.857999 kubelet[2260]: E0910 23:48:55.857906 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:48:55.862768 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:48:55.863105 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:48:55.864018 systemd[1]: kubelet.service: Consumed 1.411s CPU time, 255.6M memory peak.
Sep 10 23:48:56.421472 systemd-resolved[1918]: Clock change detected. Flushing caches.
Sep 10 23:49:02.956105 systemd[1]: Started sshd@3-172.31.28.68:22-139.178.68.195:59228.service - OpenSSH per-connection server daemon (139.178.68.195:59228).
Sep 10 23:49:03.145467 sshd[2285]: Accepted publickey for core from 139.178.68.195 port 59228 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:49:03.147893 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:49:03.155799 systemd-logind[1975]: New session 4 of user core.
Sep 10 23:49:03.166481 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 10 23:49:03.289788 sshd[2287]: Connection closed by 139.178.68.195 port 59228
Sep 10 23:49:03.288885 sshd-session[2285]: pam_unix(sshd:session): session closed for user core
Sep 10 23:49:03.295807 systemd-logind[1975]: Session 4 logged out. Waiting for processes to exit.
Sep 10 23:49:03.297128 systemd[1]: sshd@3-172.31.28.68:22-139.178.68.195:59228.service: Deactivated successfully.
Sep 10 23:49:03.300211 systemd[1]: session-4.scope: Deactivated successfully.
Sep 10 23:49:03.304533 systemd-logind[1975]: Removed session 4.
Sep 10 23:49:03.322592 systemd[1]: Started sshd@4-172.31.28.68:22-139.178.68.195:59232.service - OpenSSH per-connection server daemon (139.178.68.195:59232).
Sep 10 23:49:03.518397 sshd[2293]: Accepted publickey for core from 139.178.68.195 port 59232 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:49:03.520822 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:49:03.528742 systemd-logind[1975]: New session 5 of user core.
Sep 10 23:49:03.541538 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 10 23:49:03.657516 sshd[2295]: Connection closed by 139.178.68.195 port 59232
Sep 10 23:49:03.658377 sshd-session[2293]: pam_unix(sshd:session): session closed for user core
Sep 10 23:49:03.664895 systemd[1]: sshd@4-172.31.28.68:22-139.178.68.195:59232.service: Deactivated successfully.
Sep 10 23:49:03.668547 systemd[1]: session-5.scope: Deactivated successfully.
Sep 10 23:49:03.670360 systemd-logind[1975]: Session 5 logged out. Waiting for processes to exit.
Sep 10 23:49:03.673397 systemd-logind[1975]: Removed session 5.
Sep 10 23:49:03.698016 systemd[1]: Started sshd@5-172.31.28.68:22-139.178.68.195:59244.service - OpenSSH per-connection server daemon (139.178.68.195:59244).
Sep 10 23:49:03.894916 sshd[2301]: Accepted publickey for core from 139.178.68.195 port 59244 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:49:03.897791 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:49:03.907226 systemd-logind[1975]: New session 6 of user core.
Sep 10 23:49:03.913511 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 10 23:49:04.039096 sshd[2303]: Connection closed by 139.178.68.195 port 59244
Sep 10 23:49:04.039901 sshd-session[2301]: pam_unix(sshd:session): session closed for user core
Sep 10 23:49:04.046243 systemd[1]: sshd@5-172.31.28.68:22-139.178.68.195:59244.service: Deactivated successfully.
Sep 10 23:49:04.049915 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 23:49:04.052752 systemd-logind[1975]: Session 6 logged out. Waiting for processes to exit.
Sep 10 23:49:04.055362 systemd-logind[1975]: Removed session 6.
Sep 10 23:49:04.071577 systemd[1]: Started sshd@6-172.31.28.68:22-139.178.68.195:59254.service - OpenSSH per-connection server daemon (139.178.68.195:59254).
Sep 10 23:49:04.275511 sshd[2309]: Accepted publickey for core from 139.178.68.195 port 59254 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:49:04.278113 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:49:04.288338 systemd-logind[1975]: New session 7 of user core.
Sep 10 23:49:04.295549 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 10 23:49:04.418422 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 10 23:49:04.419023 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:49:04.439302 sudo[2312]: pam_unix(sudo:session): session closed for user root
Sep 10 23:49:04.462409 sshd[2311]: Connection closed by 139.178.68.195 port 59254
Sep 10 23:49:04.463471 sshd-session[2309]: pam_unix(sshd:session): session closed for user core
Sep 10 23:49:04.470175 systemd[1]: sshd@6-172.31.28.68:22-139.178.68.195:59254.service: Deactivated successfully.
Sep 10 23:49:04.470894 systemd-logind[1975]: Session 7 logged out. Waiting for processes to exit.
Sep 10 23:49:04.474032 systemd[1]: session-7.scope: Deactivated successfully.
Sep 10 23:49:04.479724 systemd-logind[1975]: Removed session 7.
Sep 10 23:49:04.497932 systemd[1]: Started sshd@7-172.31.28.68:22-139.178.68.195:59260.service - OpenSSH per-connection server daemon (139.178.68.195:59260).
Sep 10 23:49:04.690441 sshd[2318]: Accepted publickey for core from 139.178.68.195 port 59260 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:49:04.693568 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:49:04.703311 systemd-logind[1975]: New session 8 of user core.
Sep 10 23:49:04.709530 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 10 23:49:04.812875 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 10 23:49:04.814024 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:49:04.821476 sudo[2322]: pam_unix(sudo:session): session closed for user root
Sep 10 23:49:04.830862 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 10 23:49:04.831524 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:49:04.849612 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:49:04.911032 augenrules[2344]: No rules
Sep 10 23:49:04.913592 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:49:04.914112 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:49:04.917232 sudo[2321]: pam_unix(sudo:session): session closed for user root
Sep 10 23:49:04.939886 sshd[2320]: Connection closed by 139.178.68.195 port 59260
Sep 10 23:49:04.940728 sshd-session[2318]: pam_unix(sshd:session): session closed for user core
Sep 10 23:49:04.947709 systemd[1]: sshd@7-172.31.28.68:22-139.178.68.195:59260.service: Deactivated successfully.
Sep 10 23:49:04.948465 systemd-logind[1975]: Session 8 logged out. Waiting for processes to exit.
Sep 10 23:49:04.951034 systemd[1]: session-8.scope: Deactivated successfully.
Sep 10 23:49:04.955953 systemd-logind[1975]: Removed session 8.
Sep 10 23:49:04.979359 systemd[1]: Started sshd@8-172.31.28.68:22-139.178.68.195:59262.service - OpenSSH per-connection server daemon (139.178.68.195:59262).
Sep 10 23:49:05.175817 sshd[2353]: Accepted publickey for core from 139.178.68.195 port 59262 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:49:05.178303 sshd-session[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:49:05.185927 systemd-logind[1975]: New session 9 of user core.
Sep 10 23:49:05.206526 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 10 23:49:05.309079 sudo[2356]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 23:49:05.309733 sudo[2356]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:49:05.685246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:49:05.688114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:49:05.911220 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 10 23:49:05.926844 (dockerd)[2377]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 10 23:49:06.082528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:49:06.097131 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:49:06.200291 kubelet[2383]: E0910 23:49:06.198837 2383 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:49:06.206796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:49:06.207102 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:49:06.207702 systemd[1]: kubelet.service: Consumed 336ms CPU time, 105.6M memory peak.
Sep 10 23:49:06.378382 dockerd[2377]: time="2025-09-10T23:49:06.377295161Z" level=info msg="Starting up"
Sep 10 23:49:06.380578 dockerd[2377]: time="2025-09-10T23:49:06.380526941Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 10 23:49:06.430342 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport200853153-merged.mount: Deactivated successfully.
Sep 10 23:49:06.453403 dockerd[2377]: time="2025-09-10T23:49:06.453326537Z" level=info msg="Loading containers: start."
Sep 10 23:49:06.468291 kernel: Initializing XFRM netlink socket
Sep 10 23:49:06.790358 (udev-worker)[2410]: Network interface NamePolicy= disabled on kernel command line.
Sep 10 23:49:06.869370 systemd-networkd[1914]: docker0: Link UP
Sep 10 23:49:06.874464 dockerd[2377]: time="2025-09-10T23:49:06.874307467Z" level=info msg="Loading containers: done."
Sep 10 23:49:06.899144 dockerd[2377]: time="2025-09-10T23:49:06.898467068Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 23:49:06.899144 dockerd[2377]: time="2025-09-10T23:49:06.898586516Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 10 23:49:06.899144 dockerd[2377]: time="2025-09-10T23:49:06.898768916Z" level=info msg="Initializing buildkit"
Sep 10 23:49:06.901838 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2707219420-merged.mount: Deactivated successfully.
Sep 10 23:49:06.937606 dockerd[2377]: time="2025-09-10T23:49:06.937537436Z" level=info msg="Completed buildkit initialization"
Sep 10 23:49:06.953675 dockerd[2377]: time="2025-09-10T23:49:06.952901444Z" level=info msg="Daemon has completed initialization"
Sep 10 23:49:06.953675 dockerd[2377]: time="2025-09-10T23:49:06.953305964Z" level=info msg="API listen on /run/docker.sock"
Sep 10 23:49:06.953518 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 23:49:08.462215 containerd[2003]: time="2025-09-10T23:49:08.462158227Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 10 23:49:09.049970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047137709.mount: Deactivated successfully.
Sep 10 23:49:11.253922 containerd[2003]: time="2025-09-10T23:49:11.253842177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:11.256353 containerd[2003]: time="2025-09-10T23:49:11.256298697Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685"
Sep 10 23:49:11.257660 containerd[2003]: time="2025-09-10T23:49:11.257580237Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:11.263396 containerd[2003]: time="2025-09-10T23:49:11.263314461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:11.265088 containerd[2003]: time="2025-09-10T23:49:11.264387345Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.802167774s"
Sep 10 23:49:11.265088 containerd[2003]: time="2025-09-10T23:49:11.264444333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Sep 10 23:49:11.265561 containerd[2003]: time="2025-09-10T23:49:11.265508685Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 10 23:49:13.402850 containerd[2003]: time="2025-09-10T23:49:13.402768036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:13.404987 containerd[2003]: time="2025-09-10T23:49:13.404921076Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200"
Sep 10 23:49:13.406044 containerd[2003]: time="2025-09-10T23:49:13.405987504Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:13.412305 containerd[2003]: time="2025-09-10T23:49:13.412205352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:13.414490 containerd[2003]: time="2025-09-10T23:49:13.414145992Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 2.148580115s"
Sep 10 23:49:13.414490 containerd[2003]: time="2025-09-10T23:49:13.414204924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Sep 10 23:49:13.414991 containerd[2003]: time="2025-09-10T23:49:13.414957132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 10 23:49:15.101924 containerd[2003]: time="2025-09-10T23:49:15.101843748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:15.104877 containerd[2003]: time="2025-09-10T23:49:15.104810616Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324"
Sep 10 23:49:15.105808 containerd[2003]: time="2025-09-10T23:49:15.105752328Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:15.113150 containerd[2003]: time="2025-09-10T23:49:15.113071908Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.697981624s"
Sep 10 23:49:15.113150 containerd[2003]: time="2025-09-10T23:49:15.113135100Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Sep 10 23:49:15.113579 containerd[2003]:
time="2025-09-10T23:49:15.111108252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:15.114297 containerd[2003]: time="2025-09-10T23:49:15.114084636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 10 23:49:16.431348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19284277.mount: Deactivated successfully. Sep 10 23:49:16.433507 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 23:49:16.437613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:16.817844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:16.831774 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:49:16.925165 kubelet[2677]: E0910 23:49:16.925081 2677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:49:16.932642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:49:16.932949 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:49:16.933943 systemd[1]: kubelet.service: Consumed 318ms CPU time, 105.4M memory peak. 
Sep 10 23:49:17.226044 containerd[2003]: time="2025-09-10T23:49:17.225876255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:17.227756 containerd[2003]: time="2025-09-10T23:49:17.227683731Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Sep 10 23:49:17.229275 containerd[2003]: time="2025-09-10T23:49:17.229181379Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:17.231779 containerd[2003]: time="2025-09-10T23:49:17.231685119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:17.233288 containerd[2003]: time="2025-09-10T23:49:17.233027739Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 2.118889979s" Sep 10 23:49:17.233288 containerd[2003]: time="2025-09-10T23:49:17.233082855Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 10 23:49:17.233878 containerd[2003]: time="2025-09-10T23:49:17.233842755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 23:49:17.780965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962486889.mount: Deactivated successfully. 
Sep 10 23:49:19.035875 containerd[2003]: time="2025-09-10T23:49:19.035794480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:19.038097 containerd[2003]: time="2025-09-10T23:49:19.037659196Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 10 23:49:19.040353 containerd[2003]: time="2025-09-10T23:49:19.040298476Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:19.045852 containerd[2003]: time="2025-09-10T23:49:19.045790084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:19.047951 containerd[2003]: time="2025-09-10T23:49:19.047893576Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.813386741s" Sep 10 23:49:19.048075 containerd[2003]: time="2025-09-10T23:49:19.047950384Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 10 23:49:19.048572 containerd[2003]: time="2025-09-10T23:49:19.048527188Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 23:49:19.543233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9424029.mount: Deactivated successfully. 
Sep 10 23:49:19.557318 containerd[2003]: time="2025-09-10T23:49:19.556745406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:49:19.559197 containerd[2003]: time="2025-09-10T23:49:19.558777630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 10 23:49:19.561417 containerd[2003]: time="2025-09-10T23:49:19.561359310Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:49:19.567320 containerd[2003]: time="2025-09-10T23:49:19.567247122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:49:19.568539 containerd[2003]: time="2025-09-10T23:49:19.568483722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 519.142922ms" Sep 10 23:49:19.568641 containerd[2003]: time="2025-09-10T23:49:19.568538910Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 10 23:49:19.569197 containerd[2003]: time="2025-09-10T23:49:19.569135634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 10 23:49:20.289761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556959282.mount: 
Deactivated successfully. Sep 10 23:49:20.952286 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 10 23:49:23.507069 containerd[2003]: time="2025-09-10T23:49:23.506986318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:23.509171 containerd[2003]: time="2025-09-10T23:49:23.508936714Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 10 23:49:23.510190 containerd[2003]: time="2025-09-10T23:49:23.510134806Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:23.515369 containerd[2003]: time="2025-09-10T23:49:23.515298562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:23.519586 containerd[2003]: time="2025-09-10T23:49:23.519516406Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.95032964s" Sep 10 23:49:23.519586 containerd[2003]: time="2025-09-10T23:49:23.519572650Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 10 23:49:27.027349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 10 23:49:27.031580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:27.378488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 23:49:27.389866 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:49:27.466768 kubelet[2826]: E0910 23:49:27.466669 2826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:49:27.470953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:49:27.471354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:49:27.471991 systemd[1]: kubelet.service: Consumed 297ms CPU time, 105.2M memory peak. Sep 10 23:49:31.962354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:31.962799 systemd[1]: kubelet.service: Consumed 297ms CPU time, 105.2M memory peak. Sep 10 23:49:31.967029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:32.016965 systemd[1]: Reload requested from client PID 2840 ('systemctl') (unit session-9.scope)... Sep 10 23:49:32.016998 systemd[1]: Reloading... Sep 10 23:49:32.262309 zram_generator::config[2887]: No configuration found. Sep 10 23:49:32.464144 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:49:32.721647 systemd[1]: Reloading finished in 703 ms. Sep 10 23:49:32.811430 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 23:49:32.811798 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 23:49:32.812496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 23:49:32.813413 systemd[1]: kubelet.service: Consumed 228ms CPU time, 95M memory peak. Sep 10 23:49:32.817931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:33.151190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:33.168830 (kubelet)[2948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:49:33.254369 kubelet[2948]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:49:33.254369 kubelet[2948]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:49:33.254369 kubelet[2948]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:49:33.254897 kubelet[2948]: I0910 23:49:33.254476 2948 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:49:34.542381 update_engine[1978]: I20250910 23:49:34.542287 1978 update_attempter.cc:509] Updating boot flags... 
Sep 10 23:49:34.670288 kubelet[2948]: I0910 23:49:34.668501 2948 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 10 23:49:34.670288 kubelet[2948]: I0910 23:49:34.668553 2948 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:49:34.670288 kubelet[2948]: I0910 23:49:34.669023 2948 server.go:954] "Client rotation is on, will bootstrap in background" Sep 10 23:49:34.745038 kubelet[2948]: E0910 23:49:34.744964 2948 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.68:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:49:34.755936 kubelet[2948]: I0910 23:49:34.755887 2948 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:49:34.777608 kubelet[2948]: I0910 23:49:34.777513 2948 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:49:34.795766 kubelet[2948]: I0910 23:49:34.795626 2948 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 23:49:34.796170 kubelet[2948]: I0910 23:49:34.796107 2948 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:49:34.802617 kubelet[2948]: I0910 23:49:34.796167 2948 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-68","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:49:34.802617 kubelet[2948]: I0910 23:49:34.801919 2948 topology_manager.go:138] "Creating topology manager with none 
policy" Sep 10 23:49:34.802617 kubelet[2948]: I0910 23:49:34.801945 2948 container_manager_linux.go:304] "Creating device plugin manager" Sep 10 23:49:34.802617 kubelet[2948]: I0910 23:49:34.802317 2948 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:49:34.813288 kubelet[2948]: I0910 23:49:34.811342 2948 kubelet.go:446] "Attempting to sync node with API server" Sep 10 23:49:34.813288 kubelet[2948]: I0910 23:49:34.811583 2948 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:49:34.813288 kubelet[2948]: I0910 23:49:34.811629 2948 kubelet.go:352] "Adding apiserver pod source" Sep 10 23:49:34.813288 kubelet[2948]: I0910 23:49:34.811650 2948 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:49:34.816614 kubelet[2948]: W0910 23:49:34.816543 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-68&limit=500&resourceVersion=0": dial tcp 172.31.28.68:6443: connect: connection refused Sep 10 23:49:34.819078 kubelet[2948]: E0910 23:49:34.818847 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-68&limit=500&resourceVersion=0\": dial tcp 172.31.28.68:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:49:34.827293 kubelet[2948]: W0910 23:49:34.823199 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.68:6443: connect: connection refused Sep 10 23:49:34.827293 kubelet[2948]: E0910 23:49:34.823291 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.28.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.68:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:49:34.827293 kubelet[2948]: I0910 23:49:34.823797 2948 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:49:34.827293 kubelet[2948]: I0910 23:49:34.824857 2948 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 23:49:34.827293 kubelet[2948]: W0910 23:49:34.825079 2948 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 23:49:34.827640 kubelet[2948]: I0910 23:49:34.827394 2948 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:49:34.827640 kubelet[2948]: I0910 23:49:34.827448 2948 server.go:1287] "Started kubelet" Sep 10 23:49:34.836578 kubelet[2948]: E0910 23:49:34.835092 2948 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.68:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.68:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-68.186410c0cc3970c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-68,UID:ip-172-31-28-68,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-68,},FirstTimestamp:2025-09-10 23:49:34.827417794 +0000 UTC m=+1.651866609,LastTimestamp:2025-09-10 23:49:34.827417794 +0000 UTC m=+1.651866609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-68,}" Sep 10 23:49:34.840294 kubelet[2948]: I0910 23:49:34.839070 2948 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:49:34.855861 kubelet[2948]: I0910 23:49:34.855089 2948 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:49:34.859276 kubelet[2948]: I0910 23:49:34.859129 2948 server.go:479] "Adding debug handlers to kubelet server" Sep 10 23:49:34.884589 kubelet[2948]: I0910 23:49:34.884455 2948 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:49:34.884911 kubelet[2948]: I0910 23:49:34.884871 2948 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:49:34.893274 kubelet[2948]: I0910 23:49:34.892173 2948 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:49:34.898823 kubelet[2948]: I0910 23:49:34.898788 2948 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:49:34.907885 kubelet[2948]: I0910 23:49:34.898923 2948 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:49:34.907885 kubelet[2948]: E0910 23:49:34.901714 2948 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-68\" not found" Sep 10 23:49:34.908631 kubelet[2948]: I0910 23:49:34.908417 2948 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:49:34.909304 kubelet[2948]: W0910 23:49:34.908751 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.68:6443: connect: connection refused Sep 10 23:49:34.909304 kubelet[2948]: E0910 23:49:34.908879 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.28.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.68:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:49:34.909785 kubelet[2948]: E0910 23:49:34.909694 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-68?timeout=10s\": dial tcp 172.31.28.68:6443: connect: connection refused" interval="200ms" Sep 10 23:49:34.910083 kubelet[2948]: I0910 23:49:34.910045 2948 factory.go:221] Registration of the systemd container factory successfully Sep 10 23:49:34.910636 kubelet[2948]: I0910 23:49:34.910588 2948 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:49:34.915590 kubelet[2948]: I0910 23:49:34.915134 2948 factory.go:221] Registration of the containerd container factory successfully Sep 10 23:49:34.946649 kubelet[2948]: E0910 23:49:34.946590 2948 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:49:35.018283 kubelet[2948]: E0910 23:49:35.016490 2948 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-68\" not found" Sep 10 23:49:35.032380 kubelet[2948]: I0910 23:49:35.032326 2948 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 23:49:35.042428 kubelet[2948]: I0910 23:49:35.042384 2948 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 23:49:35.042618 kubelet[2948]: I0910 23:49:35.042598 2948 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 23:49:35.045408 kubelet[2948]: I0910 23:49:35.045371 2948 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 23:49:35.045664 kubelet[2948]: I0910 23:49:35.045591 2948 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 23:49:35.046680 kubelet[2948]: E0910 23:49:35.045986 2948 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:49:35.052949 kubelet[2948]: W0910 23:49:35.052490 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.68:6443: connect: connection refused Sep 10 23:49:35.055203 kubelet[2948]: E0910 23:49:35.055036 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.68:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:49:35.096303 kubelet[2948]: I0910 23:49:35.096239 2948 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:49:35.097132 kubelet[2948]: I0910 23:49:35.096536 2948 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:49:35.097132 kubelet[2948]: I0910 23:49:35.096570 2948 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:49:35.101810 kubelet[2948]: I0910 23:49:35.101379 2948 policy_none.go:49] "None policy: Start" Sep 10 23:49:35.101810 kubelet[2948]: I0910 23:49:35.101424 2948 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 
23:49:35.101810 kubelet[2948]: I0910 23:49:35.101447 2948 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:49:35.111051 kubelet[2948]: E0910 23:49:35.110997 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-68?timeout=10s\": dial tcp 172.31.28.68:6443: connect: connection refused" interval="400ms" Sep 10 23:49:35.116638 kubelet[2948]: E0910 23:49:35.116585 2948 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-68\" not found" Sep 10 23:49:35.117277 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 23:49:35.140094 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:49:35.150834 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 23:49:35.152275 kubelet[2948]: E0910 23:49:35.151719 2948 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 23:49:35.159623 kubelet[2948]: I0910 23:49:35.159507 2948 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 23:49:35.159870 kubelet[2948]: I0910 23:49:35.159822 2948 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:49:35.159939 kubelet[2948]: I0910 23:49:35.159855 2948 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:49:35.162612 kubelet[2948]: I0910 23:49:35.161791 2948 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:49:35.166421 kubelet[2948]: E0910 23:49:35.166385 2948 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 23:49:35.167357 kubelet[2948]: E0910 23:49:35.167125 2948 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-68\" not found" Sep 10 23:49:35.260700 kubelet[2948]: E0910 23:49:35.259188 2948 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.68:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.68:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-68.186410c0cc3970c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-68,UID:ip-172-31-28-68,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-68,},FirstTimestamp:2025-09-10 23:49:34.827417794 +0000 UTC m=+1.651866609,LastTimestamp:2025-09-10 23:49:34.827417794 +0000 UTC m=+1.651866609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-68,}" Sep 10 23:49:35.263959 kubelet[2948]: I0910 23:49:35.263857 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-68" Sep 10 23:49:35.266111 kubelet[2948]: E0910 23:49:35.266047 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.68:6443/api/v1/nodes\": dial tcp 172.31.28.68:6443: connect: connection refused" node="ip-172-31-28-68" Sep 10 23:49:35.418616 kubelet[2948]: I0910 23:49:35.417725 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d90eefe7f7a14f6d4aa1c1db9b533b96-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-68\" (UID: \"d90eefe7f7a14f6d4aa1c1db9b533b96\") " 
pod="kube-system/kube-apiserver-ip-172-31-28-68" Sep 10 23:49:35.421450 kubelet[2948]: I0910 23:49:35.420858 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bbeb190760ce202d970ba83cf157815e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-68\" (UID: \"bbeb190760ce202d970ba83cf157815e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-68" Sep 10 23:49:35.421450 kubelet[2948]: I0910 23:49:35.420927 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bbeb190760ce202d970ba83cf157815e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-68\" (UID: \"bbeb190760ce202d970ba83cf157815e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-68" Sep 10 23:49:35.421450 kubelet[2948]: I0910 23:49:35.420967 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03a837b0e04d2baa148cd6a78a51bf7d-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-68\" (UID: \"03a837b0e04d2baa148cd6a78a51bf7d\") " pod="kube-system/kube-scheduler-ip-172-31-28-68" Sep 10 23:49:35.421450 kubelet[2948]: I0910 23:49:35.421005 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d90eefe7f7a14f6d4aa1c1db9b533b96-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-68\" (UID: \"d90eefe7f7a14f6d4aa1c1db9b533b96\") " pod="kube-system/kube-apiserver-ip-172-31-28-68" Sep 10 23:49:35.421450 kubelet[2948]: I0910 23:49:35.421046 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bbeb190760ce202d970ba83cf157815e-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-28-68\" (UID: \"bbeb190760ce202d970ba83cf157815e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-68" Sep 10 23:49:35.421781 kubelet[2948]: I0910 23:49:35.421082 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bbeb190760ce202d970ba83cf157815e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-68\" (UID: \"bbeb190760ce202d970ba83cf157815e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-68" Sep 10 23:49:35.421781 kubelet[2948]: I0910 23:49:35.421137 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bbeb190760ce202d970ba83cf157815e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-68\" (UID: \"bbeb190760ce202d970ba83cf157815e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-68" Sep 10 23:49:35.421781 kubelet[2948]: I0910 23:49:35.421176 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d90eefe7f7a14f6d4aa1c1db9b533b96-ca-certs\") pod \"kube-apiserver-ip-172-31-28-68\" (UID: \"d90eefe7f7a14f6d4aa1c1db9b533b96\") " pod="kube-system/kube-apiserver-ip-172-31-28-68" Sep 10 23:49:35.467921 systemd[1]: Created slice kubepods-burstable-podd90eefe7f7a14f6d4aa1c1db9b533b96.slice - libcontainer container kubepods-burstable-podd90eefe7f7a14f6d4aa1c1db9b533b96.slice. 
Sep 10 23:49:35.472727 kubelet[2948]: I0910 23:49:35.472560 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-68" Sep 10 23:49:35.474627 kubelet[2948]: E0910 23:49:35.474546 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.68:6443/api/v1/nodes\": dial tcp 172.31.28.68:6443: connect: connection refused" node="ip-172-31-28-68" Sep 10 23:49:35.490699 kubelet[2948]: E0910 23:49:35.490245 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-68\" not found" node="ip-172-31-28-68" Sep 10 23:49:35.503200 systemd[1]: Created slice kubepods-burstable-podbbeb190760ce202d970ba83cf157815e.slice - libcontainer container kubepods-burstable-podbbeb190760ce202d970ba83cf157815e.slice. Sep 10 23:49:35.509472 kubelet[2948]: E0910 23:49:35.509417 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-68\" not found" node="ip-172-31-28-68" Sep 10 23:49:35.516000 kubelet[2948]: E0910 23:49:35.513452 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-68?timeout=10s\": dial tcp 172.31.28.68:6443: connect: connection refused" interval="800ms" Sep 10 23:49:35.514828 systemd[1]: Created slice kubepods-burstable-pod03a837b0e04d2baa148cd6a78a51bf7d.slice - libcontainer container kubepods-burstable-pod03a837b0e04d2baa148cd6a78a51bf7d.slice. 
Sep 10 23:49:35.520472 kubelet[2948]: E0910 23:49:35.520431 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-68\" not found" node="ip-172-31-28-68" Sep 10 23:49:35.737777 kubelet[2948]: W0910 23:49:35.737430 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.68:6443: connect: connection refused Sep 10 23:49:35.737777 kubelet[2948]: E0910 23:49:35.737528 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.68:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:49:35.789904 kubelet[2948]: W0910 23:49:35.789771 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.68:6443: connect: connection refused Sep 10 23:49:35.789904 kubelet[2948]: E0910 23:49:35.789867 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.68:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:49:35.792626 containerd[2003]: time="2025-09-10T23:49:35.792561827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-68,Uid:d90eefe7f7a14f6d4aa1c1db9b533b96,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:35.813572 containerd[2003]: time="2025-09-10T23:49:35.813504311Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-68,Uid:bbeb190760ce202d970ba83cf157815e,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:35.830903 containerd[2003]: time="2025-09-10T23:49:35.830638607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-68,Uid:03a837b0e04d2baa148cd6a78a51bf7d,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:35.836395 containerd[2003]: time="2025-09-10T23:49:35.836210543Z" level=info msg="connecting to shim e486f3ea25c21132bb5effd13b52be47fee3aa6d2e1400e158b36b7e91398747" address="unix:///run/containerd/s/fa4be4c0e9d861b368e1378fca4e4486e849e0cb1511b9557ea5d397da4e7cb3" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:35.879065 kubelet[2948]: I0910 23:49:35.878651 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-68" Sep 10 23:49:35.879230 kubelet[2948]: E0910 23:49:35.879162 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.68:6443/api/v1/nodes\": dial tcp 172.31.28.68:6443: connect: connection refused" node="ip-172-31-28-68" Sep 10 23:49:35.900107 containerd[2003]: time="2025-09-10T23:49:35.900027348Z" level=info msg="connecting to shim 080dcdebefa61286a396facef1badcf593357a0138a256c71fac075e6b31ceee" address="unix:///run/containerd/s/df8295a82863a08f56a141eac0823bbe8c482caf2ce487cc5a128582f257c504" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:35.929674 systemd[1]: Started cri-containerd-e486f3ea25c21132bb5effd13b52be47fee3aa6d2e1400e158b36b7e91398747.scope - libcontainer container e486f3ea25c21132bb5effd13b52be47fee3aa6d2e1400e158b36b7e91398747. 
Sep 10 23:49:35.942290 containerd[2003]: time="2025-09-10T23:49:35.941522448Z" level=info msg="connecting to shim c5750422c6a4bbd1434be48ecb07166b214be4152f836db3ee868c50fae074dc" address="unix:///run/containerd/s/77e389dce019b8358e7b7cec0fac7447f4f74499b2da34ebb15946af291bf560" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:36.002688 systemd[1]: Started cri-containerd-080dcdebefa61286a396facef1badcf593357a0138a256c71fac075e6b31ceee.scope - libcontainer container 080dcdebefa61286a396facef1badcf593357a0138a256c71fac075e6b31ceee. Sep 10 23:49:36.031575 systemd[1]: Started cri-containerd-c5750422c6a4bbd1434be48ecb07166b214be4152f836db3ee868c50fae074dc.scope - libcontainer container c5750422c6a4bbd1434be48ecb07166b214be4152f836db3ee868c50fae074dc. Sep 10 23:49:36.084409 kubelet[2948]: W0910 23:49:36.084309 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-68&limit=500&resourceVersion=0": dial tcp 172.31.28.68:6443: connect: connection refused Sep 10 23:49:36.084910 kubelet[2948]: E0910 23:49:36.084497 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-68&limit=500&resourceVersion=0\": dial tcp 172.31.28.68:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:49:36.100611 containerd[2003]: time="2025-09-10T23:49:36.100406829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-68,Uid:d90eefe7f7a14f6d4aa1c1db9b533b96,Namespace:kube-system,Attempt:0,} returns sandbox id \"e486f3ea25c21132bb5effd13b52be47fee3aa6d2e1400e158b36b7e91398747\"" Sep 10 23:49:36.111714 containerd[2003]: time="2025-09-10T23:49:36.111638349Z" level=info msg="CreateContainer within sandbox 
\"e486f3ea25c21132bb5effd13b52be47fee3aa6d2e1400e158b36b7e91398747\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 23:49:36.128187 containerd[2003]: time="2025-09-10T23:49:36.128107629Z" level=info msg="Container c265ae6da85cef0737b0047ca9136c95c0f65bc84ad3cefdf4267a419766e46c: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:36.146951 containerd[2003]: time="2025-09-10T23:49:36.146899761Z" level=info msg="CreateContainer within sandbox \"e486f3ea25c21132bb5effd13b52be47fee3aa6d2e1400e158b36b7e91398747\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c265ae6da85cef0737b0047ca9136c95c0f65bc84ad3cefdf4267a419766e46c\"" Sep 10 23:49:36.152197 containerd[2003]: time="2025-09-10T23:49:36.152103561Z" level=info msg="StartContainer for \"c265ae6da85cef0737b0047ca9136c95c0f65bc84ad3cefdf4267a419766e46c\"" Sep 10 23:49:36.159965 containerd[2003]: time="2025-09-10T23:49:36.157061253Z" level=info msg="connecting to shim c265ae6da85cef0737b0047ca9136c95c0f65bc84ad3cefdf4267a419766e46c" address="unix:///run/containerd/s/fa4be4c0e9d861b368e1378fca4e4486e849e0cb1511b9557ea5d397da4e7cb3" protocol=ttrpc version=3 Sep 10 23:49:36.166380 containerd[2003]: time="2025-09-10T23:49:36.166317381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-68,Uid:bbeb190760ce202d970ba83cf157815e,Namespace:kube-system,Attempt:0,} returns sandbox id \"080dcdebefa61286a396facef1badcf593357a0138a256c71fac075e6b31ceee\"" Sep 10 23:49:36.168376 kubelet[2948]: W0910 23:49:36.168232 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.68:6443: connect: connection refused Sep 10 23:49:36.168732 kubelet[2948]: E0910 23:49:36.168583 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.68:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:49:36.171663 containerd[2003]: time="2025-09-10T23:49:36.171561405Z" level=info msg="CreateContainer within sandbox \"080dcdebefa61286a396facef1badcf593357a0138a256c71fac075e6b31ceee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 23:49:36.186817 containerd[2003]: time="2025-09-10T23:49:36.185547453Z" level=info msg="Container 5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:36.203905 containerd[2003]: time="2025-09-10T23:49:36.203843409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-68,Uid:03a837b0e04d2baa148cd6a78a51bf7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5750422c6a4bbd1434be48ecb07166b214be4152f836db3ee868c50fae074dc\"" Sep 10 23:49:36.208973 containerd[2003]: time="2025-09-10T23:49:36.208897365Z" level=info msg="CreateContainer within sandbox \"c5750422c6a4bbd1434be48ecb07166b214be4152f836db3ee868c50fae074dc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 23:49:36.215913 containerd[2003]: time="2025-09-10T23:49:36.215233533Z" level=info msg="CreateContainer within sandbox \"080dcdebefa61286a396facef1badcf593357a0138a256c71fac075e6b31ceee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3\"" Sep 10 23:49:36.217921 containerd[2003]: time="2025-09-10T23:49:36.217479105Z" level=info msg="StartContainer for \"5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3\"" Sep 10 23:49:36.218572 systemd[1]: Started cri-containerd-c265ae6da85cef0737b0047ca9136c95c0f65bc84ad3cefdf4267a419766e46c.scope - libcontainer container 
c265ae6da85cef0737b0047ca9136c95c0f65bc84ad3cefdf4267a419766e46c. Sep 10 23:49:36.224741 containerd[2003]: time="2025-09-10T23:49:36.224665077Z" level=info msg="connecting to shim 5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3" address="unix:///run/containerd/s/df8295a82863a08f56a141eac0823bbe8c482caf2ce487cc5a128582f257c504" protocol=ttrpc version=3 Sep 10 23:49:36.238622 containerd[2003]: time="2025-09-10T23:49:36.238568913Z" level=info msg="Container 24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:36.257723 containerd[2003]: time="2025-09-10T23:49:36.257554749Z" level=info msg="CreateContainer within sandbox \"c5750422c6a4bbd1434be48ecb07166b214be4152f836db3ee868c50fae074dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17\"" Sep 10 23:49:36.261229 containerd[2003]: time="2025-09-10T23:49:36.259400901Z" level=info msg="StartContainer for \"24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17\"" Sep 10 23:49:36.262435 containerd[2003]: time="2025-09-10T23:49:36.262324473Z" level=info msg="connecting to shim 24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17" address="unix:///run/containerd/s/77e389dce019b8358e7b7cec0fac7447f4f74499b2da34ebb15946af291bf560" protocol=ttrpc version=3 Sep 10 23:49:36.286653 systemd[1]: Started cri-containerd-5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3.scope - libcontainer container 5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3. 
Sep 10 23:49:36.315868 kubelet[2948]: E0910 23:49:36.315798 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-68?timeout=10s\": dial tcp 172.31.28.68:6443: connect: connection refused" interval="1.6s" Sep 10 23:49:36.319600 systemd[1]: Started cri-containerd-24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17.scope - libcontainer container 24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17. Sep 10 23:49:36.390272 containerd[2003]: time="2025-09-10T23:49:36.390185626Z" level=info msg="StartContainer for \"c265ae6da85cef0737b0047ca9136c95c0f65bc84ad3cefdf4267a419766e46c\" returns successfully" Sep 10 23:49:36.445481 containerd[2003]: time="2025-09-10T23:49:36.445434970Z" level=info msg="StartContainer for \"5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3\" returns successfully" Sep 10 23:49:36.526772 containerd[2003]: time="2025-09-10T23:49:36.526599683Z" level=info msg="StartContainer for \"24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17\" returns successfully" Sep 10 23:49:36.683823 kubelet[2948]: I0910 23:49:36.683398 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-68" Sep 10 23:49:37.082856 kubelet[2948]: E0910 23:49:37.082548 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-68\" not found" node="ip-172-31-28-68" Sep 10 23:49:37.090085 kubelet[2948]: E0910 23:49:37.089861 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-68\" not found" node="ip-172-31-28-68" Sep 10 23:49:37.095322 kubelet[2948]: E0910 23:49:37.094801 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-68\" not found" node="ip-172-31-28-68" Sep 10 
23:49:38.097678 kubelet[2948]: E0910 23:49:38.097132 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-68\" not found" node="ip-172-31-28-68" Sep 10 23:49:38.100413 kubelet[2948]: E0910 23:49:38.099665 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-68\" not found" node="ip-172-31-28-68" Sep 10 23:49:38.101305 kubelet[2948]: E0910 23:49:38.100990 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-68\" not found" node="ip-172-31-28-68" Sep 10 23:49:40.776954 kubelet[2948]: I0910 23:49:40.776571 2948 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-68" Sep 10 23:49:40.803645 kubelet[2948]: I0910 23:49:40.803587 2948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-68" Sep 10 23:49:40.830484 kubelet[2948]: I0910 23:49:40.830428 2948 apiserver.go:52] "Watching apiserver" Sep 10 23:49:40.859871 kubelet[2948]: E0910 23:49:40.859128 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-68\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-68" Sep 10 23:49:40.859871 kubelet[2948]: I0910 23:49:40.859194 2948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-68" Sep 10 23:49:40.864704 kubelet[2948]: E0910 23:49:40.864644 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-68\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-68" Sep 10 23:49:40.864704 kubelet[2948]: I0910 23:49:40.864695 2948 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ip-172-31-28-68" Sep 10 23:49:40.869088 kubelet[2948]: E0910 23:49:40.869022 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-68\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-68" Sep 10 23:49:40.908500 kubelet[2948]: I0910 23:49:40.908428 2948 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:49:42.656779 systemd[1]: Reload requested from client PID 3493 ('systemctl') (unit session-9.scope)... Sep 10 23:49:42.656804 systemd[1]: Reloading... Sep 10 23:49:42.860364 zram_generator::config[3540]: No configuration found. Sep 10 23:49:43.057522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:49:43.351680 systemd[1]: Reloading finished in 694 ms. Sep 10 23:49:43.409774 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:43.428497 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:49:43.430333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:43.430428 systemd[1]: kubelet.service: Consumed 2.319s CPU time, 126.3M memory peak. Sep 10 23:49:43.435701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:43.832185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:43.855055 (kubelet)[3597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:49:43.974745 kubelet[3597]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:49:43.974745 kubelet[3597]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:49:43.974745 kubelet[3597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:49:43.975307 kubelet[3597]: I0910 23:49:43.974884 3597 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:49:43.991159 kubelet[3597]: I0910 23:49:43.991096 3597 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 10 23:49:43.991159 kubelet[3597]: I0910 23:49:43.991147 3597 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:49:43.991774 kubelet[3597]: I0910 23:49:43.991734 3597 server.go:954] "Client rotation is on, will bootstrap in background" Sep 10 23:49:43.998825 kubelet[3597]: I0910 23:49:43.997443 3597 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 23:49:44.004279 kubelet[3597]: I0910 23:49:44.004210 3597 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:49:44.015812 kubelet[3597]: I0910 23:49:44.014871 3597 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:49:44.022302 kubelet[3597]: I0910 23:49:44.022243 3597 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 23:49:44.023298 kubelet[3597]: I0910 23:49:44.022867 3597 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:49:44.023715 kubelet[3597]: I0910 23:49:44.023435 3597 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-68","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:49:44.025022 kubelet[3597]: I0910 23:49:44.024996 3597 topology_manager.go:138] "Creating topology manager with none 
policy" Sep 10 23:49:44.025180 kubelet[3597]: I0910 23:49:44.025163 3597 container_manager_linux.go:304] "Creating device plugin manager" Sep 10 23:49:44.025375 kubelet[3597]: I0910 23:49:44.025356 3597 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:49:44.026566 kubelet[3597]: I0910 23:49:44.026380 3597 kubelet.go:446] "Attempting to sync node with API server" Sep 10 23:49:44.026566 kubelet[3597]: I0910 23:49:44.026414 3597 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:49:44.026566 kubelet[3597]: I0910 23:49:44.026453 3597 kubelet.go:352] "Adding apiserver pod source" Sep 10 23:49:44.026566 kubelet[3597]: I0910 23:49:44.026478 3597 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:49:44.035587 kubelet[3597]: I0910 23:49:44.033574 3597 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:49:44.035587 kubelet[3597]: I0910 23:49:44.035363 3597 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 23:49:44.037054 kubelet[3597]: I0910 23:49:44.036981 3597 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:49:44.037168 kubelet[3597]: I0910 23:49:44.037067 3597 server.go:1287] "Started kubelet" Sep 10 23:49:44.048607 kubelet[3597]: I0910 23:49:44.048155 3597 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:49:44.057288 kubelet[3597]: I0910 23:49:44.054050 3597 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:49:44.070620 kubelet[3597]: I0910 23:49:44.070559 3597 server.go:479] "Adding debug handlers to kubelet server" Sep 10 23:49:44.078712 kubelet[3597]: I0910 23:49:44.078610 3597 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:49:44.079118 kubelet[3597]: I0910 23:49:44.079078 3597 server.go:243] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:49:44.081385 kubelet[3597]: I0910 23:49:44.081334 3597 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:49:44.097382 kubelet[3597]: I0910 23:49:44.096327 3597 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:49:44.098157 kubelet[3597]: E0910 23:49:44.097528 3597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-68\" not found" Sep 10 23:49:44.103801 kubelet[3597]: I0910 23:49:44.103750 3597 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:49:44.104032 kubelet[3597]: I0910 23:49:44.104000 3597 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:49:44.128319 kubelet[3597]: I0910 23:49:44.127098 3597 factory.go:221] Registration of the systemd container factory successfully Sep 10 23:49:44.128319 kubelet[3597]: I0910 23:49:44.127309 3597 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:49:44.154601 kubelet[3597]: I0910 23:49:44.154443 3597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 23:49:44.159811 kubelet[3597]: I0910 23:49:44.159769 3597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 23:49:44.159997 kubelet[3597]: I0910 23:49:44.159980 3597 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 23:49:44.160602 kubelet[3597]: I0910 23:49:44.160117 3597 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 23:49:44.160602 kubelet[3597]: I0910 23:49:44.160142 3597 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 10 23:49:44.160602 kubelet[3597]: E0910 23:49:44.160218 3597 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 23:49:44.164311 kubelet[3597]: E0910 23:49:44.163816 3597 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 23:49:44.168602 kubelet[3597]: I0910 23:49:44.167847 3597 factory.go:221] Registration of the containerd container factory successfully
Sep 10 23:49:44.263167 kubelet[3597]: E0910 23:49:44.262941 3597 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 10 23:49:44.317509 kubelet[3597]: I0910 23:49:44.317477 3597 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 10 23:49:44.318423 kubelet[3597]: I0910 23:49:44.317681 3597 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 10 23:49:44.318423 kubelet[3597]: I0910 23:49:44.317719 3597 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:49:44.318423 kubelet[3597]: I0910 23:49:44.317996 3597 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 10 23:49:44.318423 kubelet[3597]: I0910 23:49:44.318016 3597 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 10 23:49:44.318423 kubelet[3597]: I0910 23:49:44.318047 3597 policy_none.go:49] "None policy: Start"
Sep 10 23:49:44.318423 kubelet[3597]: I0910 23:49:44.318064 3597 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 10 23:49:44.318423 kubelet[3597]: I0910 23:49:44.318084 3597 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 23:49:44.318423 kubelet[3597]: I0910 23:49:44.318309 3597 state_mem.go:75] "Updated machine memory state"
Sep 10 23:49:44.329657 kubelet[3597]: I0910 23:49:44.328921 3597 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 23:49:44.330838 kubelet[3597]: I0910 23:49:44.330735 3597 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 23:49:44.331150 kubelet[3597]: I0910 23:49:44.330764 3597 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 23:49:44.336775 kubelet[3597]: I0910 23:49:44.336515 3597 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 23:49:44.349447 kubelet[3597]: E0910 23:49:44.345635 3597 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 10 23:49:44.459951 kubelet[3597]: I0910 23:49:44.459612 3597 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-68"
Sep 10 23:49:44.463831 kubelet[3597]: I0910 23:49:44.463793 3597 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-68"
Sep 10 23:49:44.466033 kubelet[3597]: I0910 23:49:44.464328 3597 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-68"
Sep 10 23:49:44.466599 kubelet[3597]: I0910 23:49:44.464746 3597 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-68"
Sep 10 23:49:44.485923 kubelet[3597]: I0910 23:49:44.485392 3597 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-68"
Sep 10 23:49:44.486209 kubelet[3597]: I0910 23:49:44.486185 3597 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-68"
Sep 10 23:49:44.511280 kubelet[3597]: I0910 23:49:44.511072 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bbeb190760ce202d970ba83cf157815e-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-68\" (UID: \"bbeb190760ce202d970ba83cf157815e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-68"
Sep 10 23:49:44.511280 kubelet[3597]: I0910 23:49:44.511139 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bbeb190760ce202d970ba83cf157815e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-68\" (UID: \"bbeb190760ce202d970ba83cf157815e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-68"
Sep 10 23:49:44.511280 kubelet[3597]: I0910 23:49:44.511177 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bbeb190760ce202d970ba83cf157815e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-68\" (UID: \"bbeb190760ce202d970ba83cf157815e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-68"
Sep 10 23:49:44.511280 kubelet[3597]: I0910 23:49:44.511228 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03a837b0e04d2baa148cd6a78a51bf7d-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-68\" (UID: \"03a837b0e04d2baa148cd6a78a51bf7d\") " pod="kube-system/kube-scheduler-ip-172-31-28-68"
Sep 10 23:49:44.512571 kubelet[3597]: I0910 23:49:44.512446 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bbeb190760ce202d970ba83cf157815e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-68\" (UID: \"bbeb190760ce202d970ba83cf157815e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-68"
Sep 10 23:49:44.512902 kubelet[3597]: I0910 23:49:44.512810 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bbeb190760ce202d970ba83cf157815e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-68\" (UID: \"bbeb190760ce202d970ba83cf157815e\") " pod="kube-system/kube-controller-manager-ip-172-31-28-68"
Sep 10 23:49:44.513158 kubelet[3597]: I0910 23:49:44.513009 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d90eefe7f7a14f6d4aa1c1db9b533b96-ca-certs\") pod \"kube-apiserver-ip-172-31-28-68\" (UID: \"d90eefe7f7a14f6d4aa1c1db9b533b96\") " pod="kube-system/kube-apiserver-ip-172-31-28-68"
Sep 10 23:49:44.513357 kubelet[3597]: I0910 23:49:44.513309 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d90eefe7f7a14f6d4aa1c1db9b533b96-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-68\" (UID: \"d90eefe7f7a14f6d4aa1c1db9b533b96\") " pod="kube-system/kube-apiserver-ip-172-31-28-68"
Sep 10 23:49:44.514448 kubelet[3597]: I0910 23:49:44.514347 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d90eefe7f7a14f6d4aa1c1db9b533b96-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-68\" (UID: \"d90eefe7f7a14f6d4aa1c1db9b533b96\") " pod="kube-system/kube-apiserver-ip-172-31-28-68"
Sep 10 23:49:45.031470 kubelet[3597]: I0910 23:49:45.031232 3597 apiserver.go:52] "Watching apiserver"
Sep 10 23:49:45.104199 kubelet[3597]: I0910 23:49:45.104117 3597 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 23:49:45.231555 kubelet[3597]: I0910 23:49:45.231362 3597 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-68"
Sep 10 23:49:45.245371 kubelet[3597]: E0910 23:49:45.245275 3597 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-68\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-68"
Sep 10 23:49:45.303637 kubelet[3597]: I0910 23:49:45.302948 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-68" podStartSLOduration=1.302927082 podStartE2EDuration="1.302927082s" podCreationTimestamp="2025-09-10 23:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:49:45.285657786 +0000 UTC m=+1.421168168" watchObservedRunningTime="2025-09-10 23:49:45.302927082 +0000 UTC m=+1.438437440"
Sep 10 23:49:45.303637 kubelet[3597]: I0910 23:49:45.303116 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-68" podStartSLOduration=1.303106002 podStartE2EDuration="1.303106002s" podCreationTimestamp="2025-09-10 23:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:49:45.302645022 +0000 UTC m=+1.438155380" watchObservedRunningTime="2025-09-10 23:49:45.303106002 +0000 UTC m=+1.438616348"
Sep 10 23:49:45.347224 kubelet[3597]: I0910 23:49:45.347021 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-68" podStartSLOduration=1.346998271 podStartE2EDuration="1.346998271s" podCreationTimestamp="2025-09-10 23:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:49:45.319556946 +0000 UTC m=+1.455067496" watchObservedRunningTime="2025-09-10 23:49:45.346998271 +0000 UTC m=+1.482508617"
Sep 10 23:49:48.201436 kubelet[3597]: I0910 23:49:48.200668 3597 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 10 23:49:48.202152 containerd[2003]: time="2025-09-10T23:49:48.201139977Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 10 23:49:48.204760 kubelet[3597]: I0910 23:49:48.204706 3597 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 10 23:49:49.243010 kubelet[3597]: I0910 23:49:49.242934 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl9qv\" (UniqueName: \"kubernetes.io/projected/b98b91a6-f7de-497c-af19-addcc67d41d9-kube-api-access-xl9qv\") pod \"kube-proxy-d4l9g\" (UID: \"b98b91a6-f7de-497c-af19-addcc67d41d9\") " pod="kube-system/kube-proxy-d4l9g"
Sep 10 23:49:49.243972 kubelet[3597]: I0910 23:49:49.243623 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b98b91a6-f7de-497c-af19-addcc67d41d9-kube-proxy\") pod \"kube-proxy-d4l9g\" (UID: \"b98b91a6-f7de-497c-af19-addcc67d41d9\") " pod="kube-system/kube-proxy-d4l9g"
Sep 10 23:49:49.243972 kubelet[3597]: I0910 23:49:49.243724 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b98b91a6-f7de-497c-af19-addcc67d41d9-xtables-lock\") pod \"kube-proxy-d4l9g\" (UID: \"b98b91a6-f7de-497c-af19-addcc67d41d9\") " pod="kube-system/kube-proxy-d4l9g"
Sep 10 23:49:49.243972 kubelet[3597]: I0910 23:49:49.243787 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b98b91a6-f7de-497c-af19-addcc67d41d9-lib-modules\") pod \"kube-proxy-d4l9g\" (UID: \"b98b91a6-f7de-497c-af19-addcc67d41d9\") " pod="kube-system/kube-proxy-d4l9g"
Sep 10 23:49:49.246219 systemd[1]: Created slice kubepods-besteffort-podb98b91a6_f7de_497c_af19_addcc67d41d9.slice - libcontainer container kubepods-besteffort-podb98b91a6_f7de_497c_af19_addcc67d41d9.slice.
Sep 10 23:49:49.390818 systemd[1]: Created slice kubepods-besteffort-pod0924b086_6370_4062_b4f3_4db68f99de1f.slice - libcontainer container kubepods-besteffort-pod0924b086_6370_4062_b4f3_4db68f99de1f.slice.
Sep 10 23:49:49.445514 kubelet[3597]: I0910 23:49:49.445462 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0924b086-6370-4062-b4f3-4db68f99de1f-var-lib-calico\") pod \"tigera-operator-755d956888-4gc74\" (UID: \"0924b086-6370-4062-b4f3-4db68f99de1f\") " pod="tigera-operator/tigera-operator-755d956888-4gc74"
Sep 10 23:49:49.445725 kubelet[3597]: I0910 23:49:49.445700 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgk6d\" (UniqueName: \"kubernetes.io/projected/0924b086-6370-4062-b4f3-4db68f99de1f-kube-api-access-hgk6d\") pod \"tigera-operator-755d956888-4gc74\" (UID: \"0924b086-6370-4062-b4f3-4db68f99de1f\") " pod="tigera-operator/tigera-operator-755d956888-4gc74"
Sep 10 23:49:49.562294 containerd[2003]: time="2025-09-10T23:49:49.561641183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4l9g,Uid:b98b91a6-f7de-497c-af19-addcc67d41d9,Namespace:kube-system,Attempt:0,}"
Sep 10 23:49:49.600295 containerd[2003]: time="2025-09-10T23:49:49.600195084Z" level=info msg="connecting to shim e3f4f135ff316f201b377c039cd50999b1fa53b8d1e273c0dc35b03259bb6a57" address="unix:///run/containerd/s/f7c090c73cb0cb672c03ae8b9d15122270087bc0b989e726d71b96991afb6e36" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:49:49.652573 systemd[1]: Started cri-containerd-e3f4f135ff316f201b377c039cd50999b1fa53b8d1e273c0dc35b03259bb6a57.scope - libcontainer container e3f4f135ff316f201b377c039cd50999b1fa53b8d1e273c0dc35b03259bb6a57.
Sep 10 23:49:49.703886 containerd[2003]: time="2025-09-10T23:49:49.703485252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-4gc74,Uid:0924b086-6370-4062-b4f3-4db68f99de1f,Namespace:tigera-operator,Attempt:0,}"
Sep 10 23:49:49.710660 containerd[2003]: time="2025-09-10T23:49:49.710528640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4l9g,Uid:b98b91a6-f7de-497c-af19-addcc67d41d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3f4f135ff316f201b377c039cd50999b1fa53b8d1e273c0dc35b03259bb6a57\""
Sep 10 23:49:49.718961 containerd[2003]: time="2025-09-10T23:49:49.718893672Z" level=info msg="CreateContainer within sandbox \"e3f4f135ff316f201b377c039cd50999b1fa53b8d1e273c0dc35b03259bb6a57\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 10 23:49:49.765659 containerd[2003]: time="2025-09-10T23:49:49.765577116Z" level=info msg="Container 15525b0600727d94444b77f67df5578eaf327dda133792015ea448404339e136: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:49:49.773996 containerd[2003]: time="2025-09-10T23:49:49.773896873Z" level=info msg="connecting to shim ea84c91bb50f5847f52238848e43b7fe2bdbe10c07e0f763d6a9c0afd6b8692a" address="unix:///run/containerd/s/f3b6752b068aea0fb63def2a131a1c0da4170c250323cfbce22a4c49a9736986" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:49:49.789141 containerd[2003]: time="2025-09-10T23:49:49.789068197Z" level=info msg="CreateContainer within sandbox \"e3f4f135ff316f201b377c039cd50999b1fa53b8d1e273c0dc35b03259bb6a57\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"15525b0600727d94444b77f67df5578eaf327dda133792015ea448404339e136\""
Sep 10 23:49:49.792520 containerd[2003]: time="2025-09-10T23:49:49.792450685Z" level=info msg="StartContainer for \"15525b0600727d94444b77f67df5578eaf327dda133792015ea448404339e136\""
Sep 10 23:49:49.801771 containerd[2003]: time="2025-09-10T23:49:49.801717265Z" level=info msg="connecting to shim 15525b0600727d94444b77f67df5578eaf327dda133792015ea448404339e136" address="unix:///run/containerd/s/f7c090c73cb0cb672c03ae8b9d15122270087bc0b989e726d71b96991afb6e36" protocol=ttrpc version=3
Sep 10 23:49:49.833662 systemd[1]: Started cri-containerd-ea84c91bb50f5847f52238848e43b7fe2bdbe10c07e0f763d6a9c0afd6b8692a.scope - libcontainer container ea84c91bb50f5847f52238848e43b7fe2bdbe10c07e0f763d6a9c0afd6b8692a.
Sep 10 23:49:49.870571 systemd[1]: Started cri-containerd-15525b0600727d94444b77f67df5578eaf327dda133792015ea448404339e136.scope - libcontainer container 15525b0600727d94444b77f67df5578eaf327dda133792015ea448404339e136.
Sep 10 23:49:49.978664 containerd[2003]: time="2025-09-10T23:49:49.978613754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-4gc74,Uid:0924b086-6370-4062-b4f3-4db68f99de1f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ea84c91bb50f5847f52238848e43b7fe2bdbe10c07e0f763d6a9c0afd6b8692a\""
Sep 10 23:49:49.983045 containerd[2003]: time="2025-09-10T23:49:49.982816406Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 10 23:49:50.000218 containerd[2003]: time="2025-09-10T23:49:50.000131458Z" level=info msg="StartContainer for \"15525b0600727d94444b77f67df5578eaf327dda133792015ea448404339e136\" returns successfully"
Sep 10 23:49:51.483886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923391586.mount: Deactivated successfully.
Sep 10 23:49:52.219076 containerd[2003]: time="2025-09-10T23:49:52.219011569Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:52.220504 containerd[2003]: time="2025-09-10T23:49:52.220372897Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 10 23:49:52.222298 containerd[2003]: time="2025-09-10T23:49:52.221375485Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:52.227987 containerd[2003]: time="2025-09-10T23:49:52.227926585Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:49:52.231744 containerd[2003]: time="2025-09-10T23:49:52.231674305Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.248797119s"
Sep 10 23:49:52.231744 containerd[2003]: time="2025-09-10T23:49:52.231739465Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 10 23:49:52.239911 containerd[2003]: time="2025-09-10T23:49:52.239730277Z" level=info msg="CreateContainer within sandbox \"ea84c91bb50f5847f52238848e43b7fe2bdbe10c07e0f763d6a9c0afd6b8692a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 10 23:49:52.254674 containerd[2003]: time="2025-09-10T23:49:52.254609641Z" level=info msg="Container cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:49:52.271200 containerd[2003]: time="2025-09-10T23:49:52.271118125Z" level=info msg="CreateContainer within sandbox \"ea84c91bb50f5847f52238848e43b7fe2bdbe10c07e0f763d6a9c0afd6b8692a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2\""
Sep 10 23:49:52.272649 containerd[2003]: time="2025-09-10T23:49:52.272592313Z" level=info msg="StartContainer for \"cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2\""
Sep 10 23:49:52.274896 containerd[2003]: time="2025-09-10T23:49:52.274809385Z" level=info msg="connecting to shim cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2" address="unix:///run/containerd/s/f3b6752b068aea0fb63def2a131a1c0da4170c250323cfbce22a4c49a9736986" protocol=ttrpc version=3
Sep 10 23:49:52.320588 systemd[1]: Started cri-containerd-cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2.scope - libcontainer container cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2.
Sep 10 23:49:52.379034 containerd[2003]: time="2025-09-10T23:49:52.378885253Z" level=info msg="StartContainer for \"cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2\" returns successfully"
Sep 10 23:49:53.306710 kubelet[3597]: I0910 23:49:53.306414 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d4l9g" podStartSLOduration=4.30629201 podStartE2EDuration="4.30629201s" podCreationTimestamp="2025-09-10 23:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:49:50.299390771 +0000 UTC m=+6.434901177" watchObservedRunningTime="2025-09-10 23:49:53.30629201 +0000 UTC m=+9.441802356"
Sep 10 23:49:53.308380 kubelet[3597]: I0910 23:49:53.307956 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-4gc74" podStartSLOduration=2.055423847 podStartE2EDuration="4.307533842s" podCreationTimestamp="2025-09-10 23:49:49 +0000 UTC" firstStartedPulling="2025-09-10 23:49:49.981169562 +0000 UTC m=+6.116679908" lastFinishedPulling="2025-09-10 23:49:52.233279557 +0000 UTC m=+8.368789903" observedRunningTime="2025-09-10 23:49:53.306222818 +0000 UTC m=+9.441733188" watchObservedRunningTime="2025-09-10 23:49:53.307533842 +0000 UTC m=+9.443044188"
Sep 10 23:50:01.180157 sudo[2356]: pam_unix(sudo:session): session closed for user root
Sep 10 23:50:01.205386 sshd[2355]: Connection closed by 139.178.68.195 port 59262
Sep 10 23:50:01.208593 sshd-session[2353]: pam_unix(sshd:session): session closed for user core
Sep 10 23:50:01.216653 systemd[1]: sshd@8-172.31.28.68:22-139.178.68.195:59262.service: Deactivated successfully.
Sep 10 23:50:01.227125 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 23:50:01.229433 systemd[1]: session-9.scope: Consumed 12.058s CPU time, 233.9M memory peak.
Sep 10 23:50:01.233936 systemd-logind[1975]: Session 9 logged out. Waiting for processes to exit.
Sep 10 23:50:01.243937 systemd-logind[1975]: Removed session 9.
Sep 10 23:50:13.132243 kubelet[3597]: W0910 23:50:13.131181 3597 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-28-68" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-28-68' and this object
Sep 10 23:50:13.132243 kubelet[3597]: E0910 23:50:13.131740 3597 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-28-68\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-68' and this object" logger="UnhandledError"
Sep 10 23:50:13.132243 kubelet[3597]: W0910 23:50:13.131898 3597 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-28-68" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-28-68' and this object
Sep 10 23:50:13.132243 kubelet[3597]: E0910 23:50:13.131930 3597 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-28-68\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-68' and this object" logger="UnhandledError"
Sep 10 23:50:13.132243 kubelet[3597]: W0910 23:50:13.131996 3597 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-28-68" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-28-68' and this object
Sep 10 23:50:13.135844 kubelet[3597]: E0910 23:50:13.132022 3597 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ip-172-31-28-68\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-68' and this object" logger="UnhandledError"
Sep 10 23:50:13.135844 kubelet[3597]: I0910 23:50:13.132077 3597 status_manager.go:890] "Failed to get status for pod" podUID="84bf1069-e5fb-4c25-847b-4fca53b7785f" pod="calico-system/calico-typha-7c86b9654b-2fqb5" err="pods \"calico-typha-7c86b9654b-2fqb5\" is forbidden: User \"system:node:ip-172-31-28-68\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-68' and this object"
Sep 10 23:50:13.134811 systemd[1]: Created slice kubepods-besteffort-pod84bf1069_e5fb_4c25_847b_4fca53b7785f.slice - libcontainer container kubepods-besteffort-pod84bf1069_e5fb_4c25_847b_4fca53b7785f.slice.
Sep 10 23:50:13.212668 kubelet[3597]: I0910 23:50:13.212597 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/84bf1069-e5fb-4c25-847b-4fca53b7785f-typha-certs\") pod \"calico-typha-7c86b9654b-2fqb5\" (UID: \"84bf1069-e5fb-4c25-847b-4fca53b7785f\") " pod="calico-system/calico-typha-7c86b9654b-2fqb5"
Sep 10 23:50:13.212856 kubelet[3597]: I0910 23:50:13.212677 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpm4r\" (UniqueName: \"kubernetes.io/projected/84bf1069-e5fb-4c25-847b-4fca53b7785f-kube-api-access-mpm4r\") pod \"calico-typha-7c86b9654b-2fqb5\" (UID: \"84bf1069-e5fb-4c25-847b-4fca53b7785f\") " pod="calico-system/calico-typha-7c86b9654b-2fqb5"
Sep 10 23:50:13.212856 kubelet[3597]: I0910 23:50:13.212721 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84bf1069-e5fb-4c25-847b-4fca53b7785f-tigera-ca-bundle\") pod \"calico-typha-7c86b9654b-2fqb5\" (UID: \"84bf1069-e5fb-4c25-847b-4fca53b7785f\") " pod="calico-system/calico-typha-7c86b9654b-2fqb5"
Sep 10 23:50:13.684371 systemd[1]: Created slice kubepods-besteffort-pod53ba11cf_c601_4b18_a752_492dd9839022.slice - libcontainer container kubepods-besteffort-pod53ba11cf_c601_4b18_a752_492dd9839022.slice.
Sep 10 23:50:13.716764 kubelet[3597]: I0910 23:50:13.716676 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/53ba11cf-c601-4b18-a752-492dd9839022-cni-net-dir\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.716764 kubelet[3597]: I0910 23:50:13.716766 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53ba11cf-c601-4b18-a752-492dd9839022-lib-modules\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.717020 kubelet[3597]: I0910 23:50:13.716827 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/53ba11cf-c601-4b18-a752-492dd9839022-policysync\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.717449 kubelet[3597]: I0910 23:50:13.717200 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/53ba11cf-c601-4b18-a752-492dd9839022-cni-log-dir\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.717449 kubelet[3597]: I0910 23:50:13.717296 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/53ba11cf-c601-4b18-a752-492dd9839022-var-run-calico\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.717449 kubelet[3597]: I0910 23:50:13.717382 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53ba11cf-c601-4b18-a752-492dd9839022-var-lib-calico\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.719573 kubelet[3597]: I0910 23:50:13.717558 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/53ba11cf-c601-4b18-a752-492dd9839022-flexvol-driver-host\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.719573 kubelet[3597]: I0910 23:50:13.717604 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhhkz\" (UniqueName: \"kubernetes.io/projected/53ba11cf-c601-4b18-a752-492dd9839022-kube-api-access-qhhkz\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.719573 kubelet[3597]: I0910 23:50:13.717649 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/53ba11cf-c601-4b18-a752-492dd9839022-cni-bin-dir\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.719573 kubelet[3597]: I0910 23:50:13.717687 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53ba11cf-c601-4b18-a752-492dd9839022-xtables-lock\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.719573 kubelet[3597]: I0910 23:50:13.717734 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/53ba11cf-c601-4b18-a752-492dd9839022-node-certs\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.719861 kubelet[3597]: I0910 23:50:13.717769 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53ba11cf-c601-4b18-a752-492dd9839022-tigera-ca-bundle\") pod \"calico-node-2bhvm\" (UID: \"53ba11cf-c601-4b18-a752-492dd9839022\") " pod="calico-system/calico-node-2bhvm"
Sep 10 23:50:13.823717 kubelet[3597]: E0910 23:50:13.823665 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:50:13.826317 kubelet[3597]: W0910 23:50:13.823957 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:50:13.826317 kubelet[3597]: E0910 23:50:13.824421 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:50:13.827611 kubelet[3597]: E0910 23:50:13.827551 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:50:13.828723 kubelet[3597]: W0910 23:50:13.828673 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:50:13.830504 kubelet[3597]: E0910 23:50:13.829152 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:50:13.834316 kubelet[3597]: E0910 23:50:13.833627 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:50:13.834316 kubelet[3597]: W0910 23:50:13.833679 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:50:13.834316 kubelet[3597]: E0910 23:50:13.833801 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:50:13.838464 kubelet[3597]: E0910 23:50:13.836059 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:50:13.838464 kubelet[3597]: W0910 23:50:13.836116 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:50:13.838464 kubelet[3597]: E0910 23:50:13.836154 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:50:13.838897 kubelet[3597]: E0910 23:50:13.838838 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:50:13.838897 kubelet[3597]: W0910 23:50:13.838886 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:50:13.839084 kubelet[3597]: E0910 23:50:13.838926 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:50:13.873664 kubelet[3597]: E0910 23:50:13.873590 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdxn4" podUID="136f9386-4257-4e17-9cdf-6a6536307346"
Sep 10 23:50:13.884493 kubelet[3597]: E0910 23:50:13.884431 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:50:13.884493 kubelet[3597]: W0910 23:50:13.884479 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:50:13.884729 kubelet[3597]: E0910 23:50:13.884516 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:50:13.885675 kubelet[3597]: E0910 23:50:13.885598 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:50:13.887461 kubelet[3597]: W0910 23:50:13.887315 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:50:13.887461 kubelet[3597]: E0910 23:50:13.887460 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:50:13.888531 kubelet[3597]: E0910 23:50:13.888458 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:50:13.888531 kubelet[3597]: W0910 23:50:13.888500 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:50:13.888755 kubelet[3597]: E0910 23:50:13.888552 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:50:13.889052 kubelet[3597]: E0910 23:50:13.888963 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:50:13.889052 kubelet[3597]: W0910 23:50:13.889007 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:50:13.889052 kubelet[3597]: E0910 23:50:13.889042 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:50:13.890622 kubelet[3597]: E0910 23:50:13.890563 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:50:13.890622 kubelet[3597]: W0910 23:50:13.890609 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:50:13.890859 kubelet[3597]: E0910 23:50:13.890646 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 10 23:50:13.892483 kubelet[3597]: E0910 23:50:13.892421 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.892483 kubelet[3597]: W0910 23:50:13.892468 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.892936 kubelet[3597]: E0910 23:50:13.892506 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.893375 kubelet[3597]: E0910 23:50:13.893319 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.893375 kubelet[3597]: W0910 23:50:13.893366 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.894407 kubelet[3597]: E0910 23:50:13.893401 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.895498 kubelet[3597]: E0910 23:50:13.895456 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.895677 kubelet[3597]: W0910 23:50:13.895649 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.895847 kubelet[3597]: E0910 23:50:13.895819 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.897725 kubelet[3597]: E0910 23:50:13.896460 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.897725 kubelet[3597]: W0910 23:50:13.897410 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.897725 kubelet[3597]: E0910 23:50:13.897453 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.898173 kubelet[3597]: E0910 23:50:13.898140 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.898348 kubelet[3597]: W0910 23:50:13.898318 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.899298 kubelet[3597]: E0910 23:50:13.898483 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.899644 kubelet[3597]: E0910 23:50:13.899612 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.899976 kubelet[3597]: W0910 23:50:13.899844 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.900281 kubelet[3597]: E0910 23:50:13.900184 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.901945 kubelet[3597]: E0910 23:50:13.901636 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.901945 kubelet[3597]: W0910 23:50:13.901673 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.901945 kubelet[3597]: E0910 23:50:13.901705 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.903582 kubelet[3597]: E0910 23:50:13.903538 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.903809 kubelet[3597]: W0910 23:50:13.903777 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.903994 kubelet[3597]: E0910 23:50:13.903960 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.907551 kubelet[3597]: E0910 23:50:13.907503 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.908204 kubelet[3597]: W0910 23:50:13.907754 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.908204 kubelet[3597]: E0910 23:50:13.907805 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.909385 kubelet[3597]: E0910 23:50:13.908747 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.909714 kubelet[3597]: W0910 23:50:13.909668 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.909864 kubelet[3597]: E0910 23:50:13.909836 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.912302 kubelet[3597]: E0910 23:50:13.910545 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.912302 kubelet[3597]: W0910 23:50:13.910590 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.912302 kubelet[3597]: E0910 23:50:13.910633 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.912924 kubelet[3597]: E0910 23:50:13.912878 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.913105 kubelet[3597]: W0910 23:50:13.913072 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.913240 kubelet[3597]: E0910 23:50:13.913211 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.913888 kubelet[3597]: E0910 23:50:13.913848 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.916308 kubelet[3597]: W0910 23:50:13.914075 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.916601 kubelet[3597]: E0910 23:50:13.916564 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.917754 kubelet[3597]: E0910 23:50:13.917432 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.917754 kubelet[3597]: W0910 23:50:13.917476 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.917754 kubelet[3597]: E0910 23:50:13.917512 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.918208 kubelet[3597]: E0910 23:50:13.918173 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.918405 kubelet[3597]: W0910 23:50:13.918369 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.918571 kubelet[3597]: E0910 23:50:13.918544 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.919540 kubelet[3597]: E0910 23:50:13.919433 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.919540 kubelet[3597]: W0910 23:50:13.919469 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.919540 kubelet[3597]: E0910 23:50:13.919500 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.920809 kubelet[3597]: I0910 23:50:13.920392 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/136f9386-4257-4e17-9cdf-6a6536307346-varrun\") pod \"csi-node-driver-wdxn4\" (UID: \"136f9386-4257-4e17-9cdf-6a6536307346\") " pod="calico-system/csi-node-driver-wdxn4" Sep 10 23:50:13.921324 kubelet[3597]: E0910 23:50:13.921115 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.921324 kubelet[3597]: W0910 23:50:13.921151 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.921324 kubelet[3597]: E0910 23:50:13.921205 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.922387 kubelet[3597]: E0910 23:50:13.922317 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.922387 kubelet[3597]: W0910 23:50:13.922365 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.922682 kubelet[3597]: E0910 23:50:13.922419 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.922682 kubelet[3597]: I0910 23:50:13.922472 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/136f9386-4257-4e17-9cdf-6a6536307346-kubelet-dir\") pod \"csi-node-driver-wdxn4\" (UID: \"136f9386-4257-4e17-9cdf-6a6536307346\") " pod="calico-system/csi-node-driver-wdxn4" Sep 10 23:50:13.923936 kubelet[3597]: E0910 23:50:13.923682 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.923936 kubelet[3597]: W0910 23:50:13.923725 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.923936 kubelet[3597]: E0910 23:50:13.923761 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.924934 kubelet[3597]: E0910 23:50:13.924832 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.924934 kubelet[3597]: W0910 23:50:13.924878 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.924934 kubelet[3597]: E0910 23:50:13.924928 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.925710 kubelet[3597]: E0910 23:50:13.925397 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.925710 kubelet[3597]: W0910 23:50:13.925424 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.925710 kubelet[3597]: E0910 23:50:13.925489 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.926374 kubelet[3597]: E0910 23:50:13.925910 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.926374 kubelet[3597]: W0910 23:50:13.925947 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.926374 kubelet[3597]: E0910 23:50:13.925977 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.926374 kubelet[3597]: I0910 23:50:13.926022 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/136f9386-4257-4e17-9cdf-6a6536307346-registration-dir\") pod \"csi-node-driver-wdxn4\" (UID: \"136f9386-4257-4e17-9cdf-6a6536307346\") " pod="calico-system/csi-node-driver-wdxn4" Sep 10 23:50:13.927309 kubelet[3597]: E0910 23:50:13.927112 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.927309 kubelet[3597]: W0910 23:50:13.927161 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.927309 kubelet[3597]: E0910 23:50:13.927211 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.928908 kubelet[3597]: I0910 23:50:13.927284 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/136f9386-4257-4e17-9cdf-6a6536307346-socket-dir\") pod \"csi-node-driver-wdxn4\" (UID: \"136f9386-4257-4e17-9cdf-6a6536307346\") " pod="calico-system/csi-node-driver-wdxn4" Sep 10 23:50:13.928908 kubelet[3597]: E0910 23:50:13.928479 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.928908 kubelet[3597]: W0910 23:50:13.928517 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.928908 kubelet[3597]: E0910 23:50:13.928593 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.928908 kubelet[3597]: I0910 23:50:13.928679 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj7p7\" (UniqueName: \"kubernetes.io/projected/136f9386-4257-4e17-9cdf-6a6536307346-kube-api-access-pj7p7\") pod \"csi-node-driver-wdxn4\" (UID: \"136f9386-4257-4e17-9cdf-6a6536307346\") " pod="calico-system/csi-node-driver-wdxn4" Sep 10 23:50:13.929163 kubelet[3597]: E0910 23:50:13.928936 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.929163 kubelet[3597]: W0910 23:50:13.928956 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.929163 kubelet[3597]: E0910 23:50:13.929002 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.930167 kubelet[3597]: E0910 23:50:13.929441 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.930167 kubelet[3597]: W0910 23:50:13.929596 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.930167 kubelet[3597]: E0910 23:50:13.929647 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.930869 kubelet[3597]: E0910 23:50:13.930553 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.930869 kubelet[3597]: W0910 23:50:13.930582 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.930869 kubelet[3597]: E0910 23:50:13.930631 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.931651 kubelet[3597]: E0910 23:50:13.931598 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.931651 kubelet[3597]: W0910 23:50:13.931639 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.932286 kubelet[3597]: E0910 23:50:13.931673 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:13.933540 kubelet[3597]: E0910 23:50:13.933481 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.933540 kubelet[3597]: W0910 23:50:13.933527 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.933862 kubelet[3597]: E0910 23:50:13.933562 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:13.934020 kubelet[3597]: E0910 23:50:13.933979 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:13.934091 kubelet[3597]: W0910 23:50:13.934021 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:13.934091 kubelet[3597]: E0910 23:50:13.934053 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.030126 kubelet[3597]: E0910 23:50:14.029889 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.030126 kubelet[3597]: W0910 23:50:14.029926 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.030126 kubelet[3597]: E0910 23:50:14.029958 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.032498 kubelet[3597]: E0910 23:50:14.032136 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.032498 kubelet[3597]: W0910 23:50:14.032173 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.032498 kubelet[3597]: E0910 23:50:14.032227 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.033538 kubelet[3597]: E0910 23:50:14.033493 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.033538 kubelet[3597]: W0910 23:50:14.033530 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.034002 kubelet[3597]: E0910 23:50:14.033578 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.034605 kubelet[3597]: E0910 23:50:14.034562 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.034605 kubelet[3597]: W0910 23:50:14.034598 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.035513 kubelet[3597]: E0910 23:50:14.035362 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.036570 kubelet[3597]: E0910 23:50:14.036505 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.036570 kubelet[3597]: W0910 23:50:14.036541 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.039049 kubelet[3597]: E0910 23:50:14.038998 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.039953 kubelet[3597]: E0910 23:50:14.039527 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.039953 kubelet[3597]: W0910 23:50:14.039563 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.039953 kubelet[3597]: E0910 23:50:14.039895 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.039953 kubelet[3597]: W0910 23:50:14.039913 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.040225 kubelet[3597]: E0910 23:50:14.040206 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.040331 kubelet[3597]: W0910 23:50:14.040228 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.040331 kubelet[3597]: E0910 23:50:14.040313 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.041072 kubelet[3597]: E0910 23:50:14.040676 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.041072 kubelet[3597]: W0910 23:50:14.040708 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.041072 kubelet[3597]: E0910 23:50:14.040734 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.041072 kubelet[3597]: E0910 23:50:14.040819 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.041072 kubelet[3597]: E0910 23:50:14.041021 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.042702 kubelet[3597]: E0910 23:50:14.041755 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.042702 kubelet[3597]: W0910 23:50:14.041784 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.042702 kubelet[3597]: E0910 23:50:14.041829 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.042702 kubelet[3597]: E0910 23:50:14.042190 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.042702 kubelet[3597]: W0910 23:50:14.042212 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.042702 kubelet[3597]: E0910 23:50:14.042238 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.045529 kubelet[3597]: E0910 23:50:14.043546 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.045529 kubelet[3597]: W0910 23:50:14.043583 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.045529 kubelet[3597]: E0910 23:50:14.043642 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.045529 kubelet[3597]: E0910 23:50:14.044017 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.045529 kubelet[3597]: W0910 23:50:14.044035 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.045529 kubelet[3597]: E0910 23:50:14.044069 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.045529 kubelet[3597]: E0910 23:50:14.044418 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.045529 kubelet[3597]: W0910 23:50:14.044440 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.045529 kubelet[3597]: E0910 23:50:14.044686 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.045529 kubelet[3597]: W0910 23:50:14.044703 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.046350 kubelet[3597]: E0910 23:50:14.044755 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.046350 kubelet[3597]: E0910 23:50:14.044808 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.046622 kubelet[3597]: E0910 23:50:14.046576 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.046732 kubelet[3597]: W0910 23:50:14.046617 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.046879 kubelet[3597]: E0910 23:50:14.046799 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.047050 kubelet[3597]: E0910 23:50:14.047015 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.047050 kubelet[3597]: W0910 23:50:14.047044 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.047546 kubelet[3597]: E0910 23:50:14.047157 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.047546 kubelet[3597]: E0910 23:50:14.047393 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.047546 kubelet[3597]: W0910 23:50:14.047412 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.048690 kubelet[3597]: E0910 23:50:14.047733 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.048690 kubelet[3597]: W0910 23:50:14.047766 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.048690 kubelet[3597]: E0910 23:50:14.048046 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.048690 kubelet[3597]: W0910 23:50:14.048063 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.048690 kubelet[3597]: E0910 23:50:14.048092 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.048690 kubelet[3597]: E0910 23:50:14.048409 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.048690 kubelet[3597]: W0910 23:50:14.048429 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.048690 kubelet[3597]: E0910 23:50:14.048452 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.048690 kubelet[3597]: E0910 23:50:14.048497 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.048690 kubelet[3597]: E0910 23:50:14.048616 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.049583 kubelet[3597]: E0910 23:50:14.049541 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.049583 kubelet[3597]: W0910 23:50:14.049579 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.051539 kubelet[3597]: E0910 23:50:14.049627 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.051539 kubelet[3597]: E0910 23:50:14.050220 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.051539 kubelet[3597]: W0910 23:50:14.050246 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.051539 kubelet[3597]: E0910 23:50:14.051409 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.051806 kubelet[3597]: E0910 23:50:14.051791 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.051858 kubelet[3597]: W0910 23:50:14.051812 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.051858 kubelet[3597]: E0910 23:50:14.051835 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.052478 kubelet[3597]: E0910 23:50:14.052432 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.052478 kubelet[3597]: W0910 23:50:14.052467 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.052683 kubelet[3597]: E0910 23:50:14.052496 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.154483 kubelet[3597]: E0910 23:50:14.154218 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.159003 kubelet[3597]: W0910 23:50:14.154684 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.159003 kubelet[3597]: E0910 23:50:14.155828 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.184657 kubelet[3597]: E0910 23:50:14.184588 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.184657 kubelet[3597]: W0910 23:50:14.184640 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.184934 kubelet[3597]: E0910 23:50:14.184679 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.208190 kubelet[3597]: E0910 23:50:14.207781 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.208190 kubelet[3597]: W0910 23:50:14.208147 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.212235 kubelet[3597]: E0910 23:50:14.211074 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.314625 kubelet[3597]: E0910 23:50:14.314229 3597 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Sep 10 23:50:14.316758 kubelet[3597]: E0910 23:50:14.316710 3597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84bf1069-e5fb-4c25-847b-4fca53b7785f-typha-certs podName:84bf1069-e5fb-4c25-847b-4fca53b7785f nodeName:}" failed. No retries permitted until 2025-09-10 23:50:14.816668022 +0000 UTC m=+30.952178380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/84bf1069-e5fb-4c25-847b-4fca53b7785f-typha-certs") pod "calico-typha-7c86b9654b-2fqb5" (UID: "84bf1069-e5fb-4c25-847b-4fca53b7785f") : failed to sync secret cache: timed out waiting for the condition Sep 10 23:50:14.320430 kubelet[3597]: E0910 23:50:14.317950 3597 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 10 23:50:14.320430 kubelet[3597]: E0910 23:50:14.319613 3597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84bf1069-e5fb-4c25-847b-4fca53b7785f-tigera-ca-bundle podName:84bf1069-e5fb-4c25-847b-4fca53b7785f nodeName:}" failed. No retries permitted until 2025-09-10 23:50:14.819579102 +0000 UTC m=+30.955089448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/84bf1069-e5fb-4c25-847b-4fca53b7785f-tigera-ca-bundle") pod "calico-typha-7c86b9654b-2fqb5" (UID: "84bf1069-e5fb-4c25-847b-4fca53b7785f") : failed to sync configmap cache: timed out waiting for the condition Sep 10 23:50:14.344418 kubelet[3597]: E0910 23:50:14.344373 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.344622 kubelet[3597]: W0910 23:50:14.344592 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.345503 kubelet[3597]: E0910 23:50:14.344855 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.346940 kubelet[3597]: E0910 23:50:14.346896 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.347586 kubelet[3597]: W0910 23:50:14.347121 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.347586 kubelet[3597]: E0910 23:50:14.347481 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.448433 kubelet[3597]: E0910 23:50:14.448378 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.448433 kubelet[3597]: W0910 23:50:14.448419 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.448433 kubelet[3597]: E0910 23:50:14.448453 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.449516 kubelet[3597]: E0910 23:50:14.449469 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.449516 kubelet[3597]: W0910 23:50:14.449508 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.450120 kubelet[3597]: E0910 23:50:14.449541 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.555674 kubelet[3597]: E0910 23:50:14.554734 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.555674 kubelet[3597]: W0910 23:50:14.554776 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.555674 kubelet[3597]: E0910 23:50:14.554813 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.556474 kubelet[3597]: E0910 23:50:14.556336 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.557187 kubelet[3597]: W0910 23:50:14.556822 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.557187 kubelet[3597]: E0910 23:50:14.556990 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.658809 kubelet[3597]: E0910 23:50:14.658654 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.658809 kubelet[3597]: W0910 23:50:14.658703 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.658809 kubelet[3597]: E0910 23:50:14.658738 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.660287 kubelet[3597]: E0910 23:50:14.660216 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.660946 kubelet[3597]: W0910 23:50:14.660859 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.660946 kubelet[3597]: E0910 23:50:14.660928 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.763599 kubelet[3597]: E0910 23:50:14.763543 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.763599 kubelet[3597]: W0910 23:50:14.763588 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.764130 kubelet[3597]: E0910 23:50:14.763635 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.764884 kubelet[3597]: E0910 23:50:14.764806 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.764884 kubelet[3597]: W0910 23:50:14.764873 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.765089 kubelet[3597]: E0910 23:50:14.764932 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.766079 kubelet[3597]: E0910 23:50:14.766036 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.766079 kubelet[3597]: W0910 23:50:14.766074 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.766377 kubelet[3597]: E0910 23:50:14.766108 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.868982 kubelet[3597]: E0910 23:50:14.868802 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.868982 kubelet[3597]: W0910 23:50:14.868966 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.869377 kubelet[3597]: E0910 23:50:14.869218 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.870027 kubelet[3597]: E0910 23:50:14.869995 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.870539 kubelet[3597]: W0910 23:50:14.870159 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.870539 kubelet[3597]: E0910 23:50:14.870319 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.870906 kubelet[3597]: E0910 23:50:14.870878 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.871027 kubelet[3597]: W0910 23:50:14.871001 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.871158 kubelet[3597]: E0910 23:50:14.871133 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.871931 kubelet[3597]: E0910 23:50:14.871877 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.872134 kubelet[3597]: W0910 23:50:14.872038 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.872853 kubelet[3597]: E0910 23:50:14.872227 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.873409 kubelet[3597]: E0910 23:50:14.873359 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.874164 kubelet[3597]: W0910 23:50:14.873600 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.874164 kubelet[3597]: E0910 23:50:14.873663 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.874719 kubelet[3597]: E0910 23:50:14.874687 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.874918 kubelet[3597]: W0910 23:50:14.874854 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.875283 kubelet[3597]: E0910 23:50:14.875120 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.876504 kubelet[3597]: E0910 23:50:14.876467 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.876996 kubelet[3597]: W0910 23:50:14.876687 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.876996 kubelet[3597]: E0910 23:50:14.876736 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.878464 kubelet[3597]: E0910 23:50:14.878423 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.879002 kubelet[3597]: W0910 23:50:14.878619 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.879002 kubelet[3597]: E0910 23:50:14.878662 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.881494 kubelet[3597]: E0910 23:50:14.881448 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.882048 kubelet[3597]: W0910 23:50:14.881716 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.882048 kubelet[3597]: E0910 23:50:14.881772 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.883490 kubelet[3597]: E0910 23:50:14.883449 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.883708 kubelet[3597]: W0910 23:50:14.883675 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.883865 kubelet[3597]: E0910 23:50:14.883837 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.889101 kubelet[3597]: E0910 23:50:14.888909 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.889101 kubelet[3597]: W0910 23:50:14.888949 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.889101 kubelet[3597]: E0910 23:50:14.888983 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:50:14.897630 kubelet[3597]: E0910 23:50:14.897575 3597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:50:14.897630 kubelet[3597]: W0910 23:50:14.897618 3597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:50:14.897971 kubelet[3597]: E0910 23:50:14.897652 3597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:50:14.900425 containerd[2003]: time="2025-09-10T23:50:14.898972465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2bhvm,Uid:53ba11cf-c601-4b18-a752-492dd9839022,Namespace:calico-system,Attempt:0,}" Sep 10 23:50:14.946050 containerd[2003]: time="2025-09-10T23:50:14.945859814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c86b9654b-2fqb5,Uid:84bf1069-e5fb-4c25-847b-4fca53b7785f,Namespace:calico-system,Attempt:0,}" Sep 10 23:50:14.982488 containerd[2003]: time="2025-09-10T23:50:14.981973862Z" level=info msg="connecting to shim add53c62883f5fb9ea69668a79fefd70d7ecfe756f5b96ae782910df9d7fc224" address="unix:///run/containerd/s/d37a6c9e70ffbcf9a36e731aec657cb1ef81b91d656d1c888a784052b2b97b1c" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:15.046956 containerd[2003]: time="2025-09-10T23:50:15.046562566Z" level=info msg="connecting to shim 23d4a94804724ca21856f06568d406278831f3f312a75c80d026f181bb996589" address="unix:///run/containerd/s/9225d59cee7486f44fedb5d9509a0710981736a086b3d6f2a13830903532bb81" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:15.075637 systemd[1]: Started cri-containerd-add53c62883f5fb9ea69668a79fefd70d7ecfe756f5b96ae782910df9d7fc224.scope - libcontainer container add53c62883f5fb9ea69668a79fefd70d7ecfe756f5b96ae782910df9d7fc224. Sep 10 23:50:15.155575 systemd[1]: Started cri-containerd-23d4a94804724ca21856f06568d406278831f3f312a75c80d026f181bb996589.scope - libcontainer container 23d4a94804724ca21856f06568d406278831f3f312a75c80d026f181bb996589. 
Sep 10 23:50:15.253294 containerd[2003]: time="2025-09-10T23:50:15.252983891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2bhvm,Uid:53ba11cf-c601-4b18-a752-492dd9839022,Namespace:calico-system,Attempt:0,} returns sandbox id \"add53c62883f5fb9ea69668a79fefd70d7ecfe756f5b96ae782910df9d7fc224\"" Sep 10 23:50:15.259803 containerd[2003]: time="2025-09-10T23:50:15.259737455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 23:50:15.343342 containerd[2003]: time="2025-09-10T23:50:15.343196376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c86b9654b-2fqb5,Uid:84bf1069-e5fb-4c25-847b-4fca53b7785f,Namespace:calico-system,Attempt:0,} returns sandbox id \"23d4a94804724ca21856f06568d406278831f3f312a75c80d026f181bb996589\"" Sep 10 23:50:16.162551 kubelet[3597]: E0910 23:50:16.161137 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdxn4" podUID="136f9386-4257-4e17-9cdf-6a6536307346" Sep 10 23:50:16.566852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751844898.mount: Deactivated successfully. 
Sep 10 23:50:16.726300 containerd[2003]: time="2025-09-10T23:50:16.725403086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:16.729737 containerd[2003]: time="2025-09-10T23:50:16.729622886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193"
Sep 10 23:50:16.733131 containerd[2003]: time="2025-09-10T23:50:16.733075550Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:16.740109 containerd[2003]: time="2025-09-10T23:50:16.739871558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:16.743427 containerd[2003]: time="2025-09-10T23:50:16.743185466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.483371955s"
Sep 10 23:50:16.743427 containerd[2003]: time="2025-09-10T23:50:16.743289950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 10 23:50:16.747981 containerd[2003]: time="2025-09-10T23:50:16.746833502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 10 23:50:16.748585 containerd[2003]: time="2025-09-10T23:50:16.748503386Z" level=info msg="CreateContainer within sandbox \"add53c62883f5fb9ea69668a79fefd70d7ecfe756f5b96ae782910df9d7fc224\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 10 23:50:16.771655 containerd[2003]: time="2025-09-10T23:50:16.770136087Z" level=info msg="Container 9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:50:16.781407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640218739.mount: Deactivated successfully.
Sep 10 23:50:16.797662 containerd[2003]: time="2025-09-10T23:50:16.797612691Z" level=info msg="CreateContainer within sandbox \"add53c62883f5fb9ea69668a79fefd70d7ecfe756f5b96ae782910df9d7fc224\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6\""
Sep 10 23:50:16.799791 containerd[2003]: time="2025-09-10T23:50:16.799732335Z" level=info msg="StartContainer for \"9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6\""
Sep 10 23:50:16.803027 containerd[2003]: time="2025-09-10T23:50:16.802933851Z" level=info msg="connecting to shim 9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6" address="unix:///run/containerd/s/d37a6c9e70ffbcf9a36e731aec657cb1ef81b91d656d1c888a784052b2b97b1c" protocol=ttrpc version=3
Sep 10 23:50:16.846606 systemd[1]: Started cri-containerd-9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6.scope - libcontainer container 9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6.
Sep 10 23:50:16.927655 containerd[2003]: time="2025-09-10T23:50:16.927595611Z" level=info msg="StartContainer for \"9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6\" returns successfully"
Sep 10 23:50:16.962522 systemd[1]: cri-containerd-9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6.scope: Deactivated successfully.
Sep 10 23:50:16.969781 containerd[2003]: time="2025-09-10T23:50:16.969700348Z" level=info msg="received exit event container_id:\"9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6\" id:\"9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6\" pid:4211 exited_at:{seconds:1757548216 nanos:969024964}"
Sep 10 23:50:16.970042 containerd[2003]: time="2025-09-10T23:50:16.969956896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6\" id:\"9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6\" pid:4211 exited_at:{seconds:1757548216 nanos:969024964}"
Sep 10 23:50:17.504995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d270d6475f64400bad0acb5b31e335aa0eb00fdd4f49737d2074df45b3be3f6-rootfs.mount: Deactivated successfully.
Sep 10 23:50:18.167463 kubelet[3597]: E0910 23:50:18.167047 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdxn4" podUID="136f9386-4257-4e17-9cdf-6a6536307346"
Sep 10 23:50:19.474877 containerd[2003]: time="2025-09-10T23:50:19.474823096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:19.477932 containerd[2003]: time="2025-09-10T23:50:19.477876388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=31736396"
Sep 10 23:50:19.480040 containerd[2003]: time="2025-09-10T23:50:19.479939980Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:19.485416 containerd[2003]: time="2025-09-10T23:50:19.485330656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:19.486867 containerd[2003]: time="2025-09-10T23:50:19.486678016Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.73977165s"
Sep 10 23:50:19.486867 containerd[2003]: time="2025-09-10T23:50:19.486732940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 10 23:50:19.489801 containerd[2003]: time="2025-09-10T23:50:19.489640420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 10 23:50:19.514621 containerd[2003]: time="2025-09-10T23:50:19.513895780Z" level=info msg="CreateContainer within sandbox \"23d4a94804724ca21856f06568d406278831f3f312a75c80d026f181bb996589\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 10 23:50:19.525868 containerd[2003]: time="2025-09-10T23:50:19.525814336Z" level=info msg="Container 1477f1a48a1f8933bce5bf3afa690a20ad02409fe17877f2eb5cac036fa11154: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:50:19.546036 containerd[2003]: time="2025-09-10T23:50:19.545982688Z" level=info msg="CreateContainer within sandbox \"23d4a94804724ca21856f06568d406278831f3f312a75c80d026f181bb996589\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1477f1a48a1f8933bce5bf3afa690a20ad02409fe17877f2eb5cac036fa11154\""
Sep 10 23:50:19.548529 containerd[2003]: time="2025-09-10T23:50:19.548432728Z" level=info msg="StartContainer for \"1477f1a48a1f8933bce5bf3afa690a20ad02409fe17877f2eb5cac036fa11154\""
Sep 10 23:50:19.552445 containerd[2003]: time="2025-09-10T23:50:19.552326212Z" level=info msg="connecting to shim 1477f1a48a1f8933bce5bf3afa690a20ad02409fe17877f2eb5cac036fa11154" address="unix:///run/containerd/s/9225d59cee7486f44fedb5d9509a0710981736a086b3d6f2a13830903532bb81" protocol=ttrpc version=3
Sep 10 23:50:19.603568 systemd[1]: Started cri-containerd-1477f1a48a1f8933bce5bf3afa690a20ad02409fe17877f2eb5cac036fa11154.scope - libcontainer container 1477f1a48a1f8933bce5bf3afa690a20ad02409fe17877f2eb5cac036fa11154.
Sep 10 23:50:19.694214 containerd[2003]: time="2025-09-10T23:50:19.694018097Z" level=info msg="StartContainer for \"1477f1a48a1f8933bce5bf3afa690a20ad02409fe17877f2eb5cac036fa11154\" returns successfully"
Sep 10 23:50:20.161482 kubelet[3597]: E0910 23:50:20.161095 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdxn4" podUID="136f9386-4257-4e17-9cdf-6a6536307346"
Sep 10 23:50:20.454573 kubelet[3597]: I0910 23:50:20.453398 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c86b9654b-2fqb5" podStartSLOduration=3.310580521 podStartE2EDuration="7.453369521s" podCreationTimestamp="2025-09-10 23:50:13 +0000 UTC" firstStartedPulling="2025-09-10 23:50:15.34579374 +0000 UTC m=+31.481304086" lastFinishedPulling="2025-09-10 23:50:19.48858274 +0000 UTC m=+35.624093086" observedRunningTime="2025-09-10 23:50:20.429453665 +0000 UTC m=+36.564964047" watchObservedRunningTime="2025-09-10 23:50:20.453369521 +0000 UTC m=+36.588879891"
Sep 10 23:50:22.169012 kubelet[3597]: E0910 23:50:22.168910 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdxn4" podUID="136f9386-4257-4e17-9cdf-6a6536307346"
Sep 10 23:50:22.615150 containerd[2003]: time="2025-09-10T23:50:22.615086492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:22.616632 containerd[2003]: time="2025-09-10T23:50:22.616545008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 10 23:50:22.618209 containerd[2003]: time="2025-09-10T23:50:22.617592296Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:22.621039 containerd[2003]: time="2025-09-10T23:50:22.620965364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:22.622757 containerd[2003]: time="2025-09-10T23:50:22.622613276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.132908884s"
Sep 10 23:50:22.622757 containerd[2003]: time="2025-09-10T23:50:22.622713176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 10 23:50:22.629473 containerd[2003]: time="2025-09-10T23:50:22.629384024Z" level=info msg="CreateContainer within sandbox \"add53c62883f5fb9ea69668a79fefd70d7ecfe756f5b96ae782910df9d7fc224\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 10 23:50:22.656351 containerd[2003]: time="2025-09-10T23:50:22.653162048Z" level=info msg="Container 80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:50:22.660851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688567876.mount: Deactivated successfully.
Sep 10 23:50:22.675104 containerd[2003]: time="2025-09-10T23:50:22.675040748Z" level=info msg="CreateContainer within sandbox \"add53c62883f5fb9ea69668a79fefd70d7ecfe756f5b96ae782910df9d7fc224\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933\""
Sep 10 23:50:22.676969 containerd[2003]: time="2025-09-10T23:50:22.676920620Z" level=info msg="StartContainer for \"80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933\""
Sep 10 23:50:22.682202 containerd[2003]: time="2025-09-10T23:50:22.682150256Z" level=info msg="connecting to shim 80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933" address="unix:///run/containerd/s/d37a6c9e70ffbcf9a36e731aec657cb1ef81b91d656d1c888a784052b2b97b1c" protocol=ttrpc version=3
Sep 10 23:50:22.722557 systemd[1]: Started cri-containerd-80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933.scope - libcontainer container 80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933.
Sep 10 23:50:22.807570 containerd[2003]: time="2025-09-10T23:50:22.807505365Z" level=info msg="StartContainer for \"80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933\" returns successfully"
Sep 10 23:50:23.767797 systemd[1]: cri-containerd-80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933.scope: Deactivated successfully.
Sep 10 23:50:23.768590 systemd[1]: cri-containerd-80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933.scope: Consumed 942ms CPU time, 187.5M memory peak, 165.8M written to disk.
Sep 10 23:50:23.772037 containerd[2003]: time="2025-09-10T23:50:23.771950481Z" level=info msg="received exit event container_id:\"80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933\" id:\"80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933\" pid:4316 exited_at:{seconds:1757548223 nanos:771402009}"
Sep 10 23:50:23.773414 containerd[2003]: time="2025-09-10T23:50:23.772223937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933\" id:\"80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933\" pid:4316 exited_at:{seconds:1757548223 nanos:771402009}"
Sep 10 23:50:23.822014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80adb788abf500342b8e5bd51aa39d0ac3652c8f509b78da2c2b1c72b97a0933-rootfs.mount: Deactivated successfully.
Sep 10 23:50:23.835723 kubelet[3597]: I0910 23:50:23.835397 3597 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 10 23:50:23.930703 systemd[1]: Created slice kubepods-besteffort-pode160ddd3_6a46_4cd7_9f45_54a7ce4bf1eb.slice - libcontainer container kubepods-besteffort-pode160ddd3_6a46_4cd7_9f45_54a7ce4bf1eb.slice.
Sep 10 23:50:23.939654 kubelet[3597]: W0910 23:50:23.936840 3597 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-28-68" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-68' and this object
Sep 10 23:50:23.939654 kubelet[3597]: E0910 23:50:23.936909 3597 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-28-68\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-68' and this object" logger="UnhandledError"
Sep 10 23:50:23.939654 kubelet[3597]: W0910 23:50:23.937095 3597 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-28-68" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-28-68' and this object
Sep 10 23:50:23.939654 kubelet[3597]: E0910 23:50:23.937164 3597 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-28-68\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-68' and this object" logger="UnhandledError"
Sep 10 23:50:23.942592 kubelet[3597]: I0910 23:50:23.941166 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbk4s\" (UniqueName: \"kubernetes.io/projected/e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb-kube-api-access-jbk4s\") pod \"calico-kube-controllers-677c6fffd9-9nnvf\" (UID: \"e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb\") " pod="calico-system/calico-kube-controllers-677c6fffd9-9nnvf"
Sep 10 23:50:23.944335 kubelet[3597]: I0910 23:50:23.942378 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb-tigera-ca-bundle\") pod \"calico-kube-controllers-677c6fffd9-9nnvf\" (UID: \"e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb\") " pod="calico-system/calico-kube-controllers-677c6fffd9-9nnvf"
Sep 10 23:50:23.964429 systemd[1]: Created slice kubepods-burstable-podf3070b0e_e694_4250_800c_4411250bc48a.slice - libcontainer container kubepods-burstable-podf3070b0e_e694_4250_800c_4411250bc48a.slice.
Sep 10 23:50:23.998024 systemd[1]: Created slice kubepods-burstable-pod6accb33c_2d9a_40dc_89a3_62a9683dfac4.slice - libcontainer container kubepods-burstable-pod6accb33c_2d9a_40dc_89a3_62a9683dfac4.slice.
Sep 10 23:50:24.033597 systemd[1]: Created slice kubepods-besteffort-pod555f34ed_e16f_48e0_b1d2_b0ae99507519.slice - libcontainer container kubepods-besteffort-pod555f34ed_e16f_48e0_b1d2_b0ae99507519.slice.
Sep 10 23:50:24.044109 kubelet[3597]: I0910 23:50:24.043969 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6aa2277f-343d-4334-98e9-82987b2b5625-whisker-backend-key-pair\") pod \"whisker-67cb5867cd-w2xkc\" (UID: \"6aa2277f-343d-4334-98e9-82987b2b5625\") " pod="calico-system/whisker-67cb5867cd-w2xkc"
Sep 10 23:50:24.050454 kubelet[3597]: I0910 23:50:24.044078 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qgf\" (UniqueName: \"kubernetes.io/projected/6accb33c-2d9a-40dc-89a3-62a9683dfac4-kube-api-access-27qgf\") pod \"coredns-668d6bf9bc-v7n5j\" (UID: \"6accb33c-2d9a-40dc-89a3-62a9683dfac4\") " pod="kube-system/coredns-668d6bf9bc-v7n5j"
Sep 10 23:50:24.050614 kubelet[3597]: I0910 23:50:24.050540 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6aa2277f-343d-4334-98e9-82987b2b5625-whisker-ca-bundle\") pod \"whisker-67cb5867cd-w2xkc\" (UID: \"6aa2277f-343d-4334-98e9-82987b2b5625\") " pod="calico-system/whisker-67cb5867cd-w2xkc"
Sep 10 23:50:24.050676 kubelet[3597]: I0910 23:50:24.050613 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3070b0e-e694-4250-800c-4411250bc48a-config-volume\") pod \"coredns-668d6bf9bc-v8tt6\" (UID: \"f3070b0e-e694-4250-800c-4411250bc48a\") " pod="kube-system/coredns-668d6bf9bc-v8tt6"
Sep 10 23:50:24.050735 kubelet[3597]: I0910 23:50:24.050688 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpdvh\" (UniqueName: \"kubernetes.io/projected/6aa2277f-343d-4334-98e9-82987b2b5625-kube-api-access-gpdvh\") pod \"whisker-67cb5867cd-w2xkc\" (UID: \"6aa2277f-343d-4334-98e9-82987b2b5625\") " pod="calico-system/whisker-67cb5867cd-w2xkc"
Sep 10 23:50:24.051334 kubelet[3597]: I0910 23:50:24.050786 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/95691e3f-1910-43a6-be81-55198cb86931-calico-apiserver-certs\") pod \"calico-apiserver-5d84f585bb-jrjmb\" (UID: \"95691e3f-1910-43a6-be81-55198cb86931\") " pod="calico-apiserver/calico-apiserver-5d84f585bb-jrjmb"
Sep 10 23:50:24.051334 kubelet[3597]: I0910 23:50:24.051210 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fb4b\" (UniqueName: \"kubernetes.io/projected/95691e3f-1910-43a6-be81-55198cb86931-kube-api-access-5fb4b\") pod \"calico-apiserver-5d84f585bb-jrjmb\" (UID: \"95691e3f-1910-43a6-be81-55198cb86931\") " pod="calico-apiserver/calico-apiserver-5d84f585bb-jrjmb"
Sep 10 23:50:24.051924 kubelet[3597]: I0910 23:50:24.051741 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed-config\") pod \"goldmane-54d579b49d-r2cst\" (UID: \"2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed\") " pod="calico-system/goldmane-54d579b49d-r2cst"
Sep 10 23:50:24.053328 kubelet[3597]: I0910 23:50:24.053122 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgqzw\" (UniqueName: \"kubernetes.io/projected/555f34ed-e16f-48e0-b1d2-b0ae99507519-kube-api-access-sgqzw\") pod \"calico-apiserver-5d84f585bb-8qbrf\" (UID: \"555f34ed-e16f-48e0-b1d2-b0ae99507519\") " pod="calico-apiserver/calico-apiserver-5d84f585bb-8qbrf"
Sep 10 23:50:24.053497 kubelet[3597]: I0910 23:50:24.053472 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-r2cst\" (UID: \"2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed\") " pod="calico-system/goldmane-54d579b49d-r2cst"
Sep 10 23:50:24.053681 kubelet[3597]: I0910 23:50:24.053636 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed-goldmane-key-pair\") pod \"goldmane-54d579b49d-r2cst\" (UID: \"2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed\") " pod="calico-system/goldmane-54d579b49d-r2cst"
Sep 10 23:50:24.053891 kubelet[3597]: I0910 23:50:24.053829 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcspl\" (UniqueName: \"kubernetes.io/projected/2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed-kube-api-access-fcspl\") pod \"goldmane-54d579b49d-r2cst\" (UID: \"2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed\") " pod="calico-system/goldmane-54d579b49d-r2cst"
Sep 10 23:50:24.055343 kubelet[3597]: I0910 23:50:24.055162 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6accb33c-2d9a-40dc-89a3-62a9683dfac4-config-volume\") pod \"coredns-668d6bf9bc-v7n5j\" (UID: \"6accb33c-2d9a-40dc-89a3-62a9683dfac4\") " pod="kube-system/coredns-668d6bf9bc-v7n5j"
Sep 10 23:50:24.055521 kubelet[3597]: I0910 23:50:24.055475 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/555f34ed-e16f-48e0-b1d2-b0ae99507519-calico-apiserver-certs\") pod \"calico-apiserver-5d84f585bb-8qbrf\" (UID: \"555f34ed-e16f-48e0-b1d2-b0ae99507519\") " pod="calico-apiserver/calico-apiserver-5d84f585bb-8qbrf"
Sep 10 23:50:24.056754 kubelet[3597]: I0910 23:50:24.056047 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwjjs\" (UniqueName: \"kubernetes.io/projected/f3070b0e-e694-4250-800c-4411250bc48a-kube-api-access-lwjjs\") pod \"coredns-668d6bf9bc-v8tt6\" (UID: \"f3070b0e-e694-4250-800c-4411250bc48a\") " pod="kube-system/coredns-668d6bf9bc-v8tt6"
Sep 10 23:50:24.108626 systemd[1]: Created slice kubepods-besteffort-pod2b3bf6ae_6a35_4ecc_96ce_ee9e408d77ed.slice - libcontainer container kubepods-besteffort-pod2b3bf6ae_6a35_4ecc_96ce_ee9e408d77ed.slice.
Sep 10 23:50:24.122867 systemd[1]: Created slice kubepods-besteffort-pod95691e3f_1910_43a6_be81_55198cb86931.slice - libcontainer container kubepods-besteffort-pod95691e3f_1910_43a6_be81_55198cb86931.slice.
Sep 10 23:50:24.141223 systemd[1]: Created slice kubepods-besteffort-pod6aa2277f_343d_4334_98e9_82987b2b5625.slice - libcontainer container kubepods-besteffort-pod6aa2277f_343d_4334_98e9_82987b2b5625.slice.
Sep 10 23:50:24.254980 containerd[2003]: time="2025-09-10T23:50:24.251733836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677c6fffd9-9nnvf,Uid:e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb,Namespace:calico-system,Attempt:0,}"
Sep 10 23:50:24.259072 systemd[1]: Created slice kubepods-besteffort-pod136f9386_4257_4e17_9cdf_6a6536307346.slice - libcontainer container kubepods-besteffort-pod136f9386_4257_4e17_9cdf_6a6536307346.slice.
Sep 10 23:50:24.277955 containerd[2003]: time="2025-09-10T23:50:24.277881800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdxn4,Uid:136f9386-4257-4e17-9cdf-6a6536307346,Namespace:calico-system,Attempt:0,}"
Sep 10 23:50:24.372389 containerd[2003]: time="2025-09-10T23:50:24.371933300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f585bb-8qbrf,Uid:555f34ed-e16f-48e0-b1d2-b0ae99507519,Namespace:calico-apiserver,Attempt:0,}"
Sep 10 23:50:24.447648 containerd[2003]: time="2025-09-10T23:50:24.447415893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f585bb-jrjmb,Uid:95691e3f-1910-43a6-be81-55198cb86931,Namespace:calico-apiserver,Attempt:0,}"
Sep 10 23:50:24.455516 containerd[2003]: time="2025-09-10T23:50:24.455469825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67cb5867cd-w2xkc,Uid:6aa2277f-343d-4334-98e9-82987b2b5625,Namespace:calico-system,Attempt:0,}"
Sep 10 23:50:24.594943 containerd[2003]: time="2025-09-10T23:50:24.594873909Z" level=error msg="Failed to destroy network for sandbox \"831770dd3367e9d908f12534711aef7b0f17ba53336d26dfd19a47fcd6d1899b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:24.747084 containerd[2003]: time="2025-09-10T23:50:24.746808250Z" level=error msg="Failed to destroy network for sandbox \"783c221b563e76424a28b75544206b80062c384df7fc200d309df0392413c8ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:24.970018 containerd[2003]: time="2025-09-10T23:50:24.969712727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677c6fffd9-9nnvf,Uid:e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"831770dd3367e9d908f12534711aef7b0f17ba53336d26dfd19a47fcd6d1899b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:24.973032 containerd[2003]: time="2025-09-10T23:50:24.972824147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdxn4,Uid:136f9386-4257-4e17-9cdf-6a6536307346,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"783c221b563e76424a28b75544206b80062c384df7fc200d309df0392413c8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:24.973362 kubelet[3597]: E0910 23:50:24.972899 3597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"831770dd3367e9d908f12534711aef7b0f17ba53336d26dfd19a47fcd6d1899b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:24.975965 kubelet[3597]: E0910 23:50:24.974062 3597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"831770dd3367e9d908f12534711aef7b0f17ba53336d26dfd19a47fcd6d1899b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677c6fffd9-9nnvf"
Sep 10 23:50:24.975965 kubelet[3597]: E0910 23:50:24.975357 3597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"831770dd3367e9d908f12534711aef7b0f17ba53336d26dfd19a47fcd6d1899b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677c6fffd9-9nnvf"
Sep 10 23:50:24.975965 kubelet[3597]: E0910 23:50:24.975528 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677c6fffd9-9nnvf_calico-system(e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677c6fffd9-9nnvf_calico-system(e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"831770dd3367e9d908f12534711aef7b0f17ba53336d26dfd19a47fcd6d1899b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677c6fffd9-9nnvf" podUID="e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb"
Sep 10 23:50:24.979710 kubelet[3597]: E0910 23:50:24.979640 3597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783c221b563e76424a28b75544206b80062c384df7fc200d309df0392413c8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:24.980294 kubelet[3597]: E0910 23:50:24.979980 3597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783c221b563e76424a28b75544206b80062c384df7fc200d309df0392413c8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wdxn4"
Sep 10 23:50:24.980294 kubelet[3597]: E0910 23:50:24.980055 3597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783c221b563e76424a28b75544206b80062c384df7fc200d309df0392413c8ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wdxn4"
Sep 10 23:50:24.980848 kubelet[3597]: E0910 23:50:24.980764 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wdxn4_calico-system(136f9386-4257-4e17-9cdf-6a6536307346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wdxn4_calico-system(136f9386-4257-4e17-9cdf-6a6536307346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"783c221b563e76424a28b75544206b80062c384df7fc200d309df0392413c8ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wdxn4" podUID="136f9386-4257-4e17-9cdf-6a6536307346"
Sep 10 23:50:25.046374 containerd[2003]: time="2025-09-10T23:50:25.046296956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-r2cst,Uid:2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed,Namespace:calico-system,Attempt:0,}"
Sep 10 23:50:25.155727 containerd[2003]: time="2025-09-10T23:50:25.155349008Z" level=error msg="Failed to destroy network for sandbox \"ef5ff8482b11ab241f66b05ca101254d45b4c4cb6d1c527e4795643c0bdde960\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:25.166125 containerd[2003]: time="2025-09-10T23:50:25.166033340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f585bb-8qbrf,Uid:555f34ed-e16f-48e0-b1d2-b0ae99507519,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5ff8482b11ab241f66b05ca101254d45b4c4cb6d1c527e4795643c0bdde960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:25.169968 kubelet[3597]: E0910 23:50:25.167487 3597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5ff8482b11ab241f66b05ca101254d45b4c4cb6d1c527e4795643c0bdde960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:25.169968 kubelet[3597]: E0910 23:50:25.167582 3597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5ff8482b11ab241f66b05ca101254d45b4c4cb6d1c527e4795643c0bdde960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d84f585bb-8qbrf"
Sep 10 23:50:25.169968 kubelet[3597]: E0910 23:50:25.167622 3597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5ff8482b11ab241f66b05ca101254d45b4c4cb6d1c527e4795643c0bdde960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d84f585bb-8qbrf"
Sep 10 23:50:25.168905 systemd[1]: run-netns-cni\x2d6b69033f\x2d6643\x2d85b0\x2df4f0\x2dacc3e9213b39.mount: Deactivated successfully.
Sep 10 23:50:25.172727 kubelet[3597]: E0910 23:50:25.167704 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d84f585bb-8qbrf_calico-apiserver(555f34ed-e16f-48e0-b1d2-b0ae99507519)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d84f585bb-8qbrf_calico-apiserver(555f34ed-e16f-48e0-b1d2-b0ae99507519)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef5ff8482b11ab241f66b05ca101254d45b4c4cb6d1c527e4795643c0bdde960\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d84f585bb-8qbrf" podUID="555f34ed-e16f-48e0-b1d2-b0ae99507519"
Sep 10 23:50:25.187045 containerd[2003]: time="2025-09-10T23:50:25.186982508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v8tt6,Uid:f3070b0e-e694-4250-800c-4411250bc48a,Namespace:kube-system,Attempt:0,}"
Sep 10 23:50:25.230290 containerd[2003]: time="2025-09-10T23:50:25.228786441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7n5j,Uid:6accb33c-2d9a-40dc-89a3-62a9683dfac4,Namespace:kube-system,Attempt:0,}"
Sep 10 23:50:25.287530 containerd[2003]: time="2025-09-10T23:50:25.287464233Z" level=error msg="Failed to destroy network for sandbox \"55bf3b1c9af4814cd3639a49d8f0c181f03f84b52547d33ae95f60bb125a2f46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:25.291295 containerd[2003]: time="2025-09-10T23:50:25.291165369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f585bb-jrjmb,Uid:95691e3f-1910-43a6-be81-55198cb86931,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bf3b1c9af4814cd3639a49d8f0c181f03f84b52547d33ae95f60bb125a2f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:25.292108 kubelet[3597]: E0910 23:50:25.292027 3597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bf3b1c9af4814cd3639a49d8f0c181f03f84b52547d33ae95f60bb125a2f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:50:25.292298 kubelet[3597]: E0910 23:50:25.292120 3597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bf3b1c9af4814cd3639a49d8f0c181f03f84b52547d33ae95f60bb125a2f46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d84f585bb-jrjmb"
Sep 10 23:50:25.292298 kubelet[3597]: E0910 23:50:25.292160 3597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bf3b1c9af4814cd3639a49d8f0c181f03f84b52547d33ae95f60bb125a2f46\": plugin
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d84f585bb-jrjmb" Sep 10 23:50:25.295240 kubelet[3597]: E0910 23:50:25.292243 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d84f585bb-jrjmb_calico-apiserver(95691e3f-1910-43a6-be81-55198cb86931)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d84f585bb-jrjmb_calico-apiserver(95691e3f-1910-43a6-be81-55198cb86931)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55bf3b1c9af4814cd3639a49d8f0c181f03f84b52547d33ae95f60bb125a2f46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d84f585bb-jrjmb" podUID="95691e3f-1910-43a6-be81-55198cb86931" Sep 10 23:50:25.326077 containerd[2003]: time="2025-09-10T23:50:25.325770861Z" level=error msg="Failed to destroy network for sandbox \"199ea558ca84f935e195666eb4c43db9d6420556b244ae6731e7971fa179db6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.330039 containerd[2003]: time="2025-09-10T23:50:25.329923509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67cb5867cd-w2xkc,Uid:6aa2277f-343d-4334-98e9-82987b2b5625,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"199ea558ca84f935e195666eb4c43db9d6420556b244ae6731e7971fa179db6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 10 23:50:25.332395 kubelet[3597]: E0910 23:50:25.331723 3597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199ea558ca84f935e195666eb4c43db9d6420556b244ae6731e7971fa179db6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.332395 kubelet[3597]: E0910 23:50:25.331814 3597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199ea558ca84f935e195666eb4c43db9d6420556b244ae6731e7971fa179db6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67cb5867cd-w2xkc" Sep 10 23:50:25.332395 kubelet[3597]: E0910 23:50:25.331858 3597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199ea558ca84f935e195666eb4c43db9d6420556b244ae6731e7971fa179db6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67cb5867cd-w2xkc" Sep 10 23:50:25.332745 kubelet[3597]: E0910 23:50:25.331938 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67cb5867cd-w2xkc_calico-system(6aa2277f-343d-4334-98e9-82987b2b5625)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67cb5867cd-w2xkc_calico-system(6aa2277f-343d-4334-98e9-82987b2b5625)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"199ea558ca84f935e195666eb4c43db9d6420556b244ae6731e7971fa179db6e\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67cb5867cd-w2xkc" podUID="6aa2277f-343d-4334-98e9-82987b2b5625" Sep 10 23:50:25.365318 containerd[2003]: time="2025-09-10T23:50:25.364837821Z" level=error msg="Failed to destroy network for sandbox \"d18fffa2c818f185afd53d2e646bcacb946ec030b57208d4b82b64373ed0672a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.367727 containerd[2003]: time="2025-09-10T23:50:25.367621593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-r2cst,Uid:2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18fffa2c818f185afd53d2e646bcacb946ec030b57208d4b82b64373ed0672a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.368339 kubelet[3597]: E0910 23:50:25.367969 3597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18fffa2c818f185afd53d2e646bcacb946ec030b57208d4b82b64373ed0672a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.368339 kubelet[3597]: E0910 23:50:25.368051 3597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18fffa2c818f185afd53d2e646bcacb946ec030b57208d4b82b64373ed0672a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-r2cst" Sep 10 23:50:25.368339 kubelet[3597]: E0910 23:50:25.368087 3597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18fffa2c818f185afd53d2e646bcacb946ec030b57208d4b82b64373ed0672a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-r2cst" Sep 10 23:50:25.370503 kubelet[3597]: E0910 23:50:25.368175 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-r2cst_calico-system(2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-r2cst_calico-system(2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d18fffa2c818f185afd53d2e646bcacb946ec030b57208d4b82b64373ed0672a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-r2cst" podUID="2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed" Sep 10 23:50:25.425639 containerd[2003]: time="2025-09-10T23:50:25.425486314Z" level=error msg="Failed to destroy network for sandbox \"0e375020caf60343359f72f839ce2c663e4787b03ad56b877d1415d96175d636\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.427532 containerd[2003]: time="2025-09-10T23:50:25.427389958Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-v8tt6,Uid:f3070b0e-e694-4250-800c-4411250bc48a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e375020caf60343359f72f839ce2c663e4787b03ad56b877d1415d96175d636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.428533 kubelet[3597]: E0910 23:50:25.428467 3597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e375020caf60343359f72f839ce2c663e4787b03ad56b877d1415d96175d636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.428668 kubelet[3597]: E0910 23:50:25.428561 3597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e375020caf60343359f72f839ce2c663e4787b03ad56b877d1415d96175d636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v8tt6" Sep 10 23:50:25.428668 kubelet[3597]: E0910 23:50:25.428602 3597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e375020caf60343359f72f839ce2c663e4787b03ad56b877d1415d96175d636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v8tt6" Sep 10 23:50:25.428834 kubelet[3597]: E0910 23:50:25.428703 3597 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v8tt6_kube-system(f3070b0e-e694-4250-800c-4411250bc48a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v8tt6_kube-system(f3070b0e-e694-4250-800c-4411250bc48a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e375020caf60343359f72f839ce2c663e4787b03ad56b877d1415d96175d636\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v8tt6" podUID="f3070b0e-e694-4250-800c-4411250bc48a" Sep 10 23:50:25.448092 containerd[2003]: time="2025-09-10T23:50:25.448010806Z" level=error msg="Failed to destroy network for sandbox \"e6459162d57a86c6410810997823518a5a2eefc56ba666c50c3bb28bbbf3def9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.451037 containerd[2003]: time="2025-09-10T23:50:25.450913846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7n5j,Uid:6accb33c-2d9a-40dc-89a3-62a9683dfac4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6459162d57a86c6410810997823518a5a2eefc56ba666c50c3bb28bbbf3def9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.451515 kubelet[3597]: E0910 23:50:25.451342 3597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6459162d57a86c6410810997823518a5a2eefc56ba666c50c3bb28bbbf3def9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:50:25.451515 kubelet[3597]: E0910 23:50:25.451429 3597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6459162d57a86c6410810997823518a5a2eefc56ba666c50c3bb28bbbf3def9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7n5j" Sep 10 23:50:25.451515 kubelet[3597]: E0910 23:50:25.451469 3597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6459162d57a86c6410810997823518a5a2eefc56ba666c50c3bb28bbbf3def9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7n5j" Sep 10 23:50:25.451746 kubelet[3597]: E0910 23:50:25.451539 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v7n5j_kube-system(6accb33c-2d9a-40dc-89a3-62a9683dfac4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v7n5j_kube-system(6accb33c-2d9a-40dc-89a3-62a9683dfac4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6459162d57a86c6410810997823518a5a2eefc56ba666c50c3bb28bbbf3def9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v7n5j" podUID="6accb33c-2d9a-40dc-89a3-62a9683dfac4" Sep 10 23:50:25.458919 containerd[2003]: time="2025-09-10T23:50:25.458573914Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 23:50:25.817740 systemd[1]: run-netns-cni\x2de4086613\x2d9e38\x2d0e5d\x2d3b71\x2d63e2822f8727.mount: Deactivated successfully. Sep 10 23:50:25.817957 systemd[1]: run-netns-cni\x2dd078375d\x2d3eef\x2db3c7\x2d488a\x2dfa3432f0b9ff.mount: Deactivated successfully. Sep 10 23:50:25.818085 systemd[1]: run-netns-cni\x2de0f439b1\x2d6c6d\x2d8ff4\x2d5adf\x2d3263a56fce06.mount: Deactivated successfully. Sep 10 23:50:25.818208 systemd[1]: run-netns-cni\x2d586dabef\x2d271b\x2d54d4\x2d8c04\x2d7b85bbc35791.mount: Deactivated successfully. Sep 10 23:50:32.569827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296712169.mount: Deactivated successfully. Sep 10 23:50:32.629358 containerd[2003]: time="2025-09-10T23:50:32.629226869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:32.631578 containerd[2003]: time="2025-09-10T23:50:32.631503917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 23:50:32.632093 containerd[2003]: time="2025-09-10T23:50:32.632018273Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:32.635930 containerd[2003]: time="2025-09-10T23:50:32.635871857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:32.637211 containerd[2003]: time="2025-09-10T23:50:32.637141721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 7.178503571s" Sep 10 23:50:32.638212 containerd[2003]: time="2025-09-10T23:50:32.637212593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 23:50:32.682405 containerd[2003]: time="2025-09-10T23:50:32.682215222Z" level=info msg="CreateContainer within sandbox \"add53c62883f5fb9ea69668a79fefd70d7ecfe756f5b96ae782910df9d7fc224\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 23:50:32.701694 containerd[2003]: time="2025-09-10T23:50:32.701628942Z" level=info msg="Container b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:32.721042 containerd[2003]: time="2025-09-10T23:50:32.720853242Z" level=info msg="CreateContainer within sandbox \"add53c62883f5fb9ea69668a79fefd70d7ecfe756f5b96ae782910df9d7fc224\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286\"" Sep 10 23:50:32.723305 containerd[2003]: time="2025-09-10T23:50:32.722862642Z" level=info msg="StartContainer for \"b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286\"" Sep 10 23:50:32.727192 containerd[2003]: time="2025-09-10T23:50:32.727120602Z" level=info msg="connecting to shim b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286" address="unix:///run/containerd/s/d37a6c9e70ffbcf9a36e731aec657cb1ef81b91d656d1c888a784052b2b97b1c" protocol=ttrpc version=3 Sep 10 23:50:32.776576 systemd[1]: Started cri-containerd-b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286.scope - libcontainer container b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286. 
Sep 10 23:50:32.893930 containerd[2003]: time="2025-09-10T23:50:32.893028571Z" level=info msg="StartContainer for \"b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286\" returns successfully" Sep 10 23:50:33.178360 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 23:50:33.178509 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 10 23:50:33.555291 kubelet[3597]: I0910 23:50:33.554721 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6aa2277f-343d-4334-98e9-82987b2b5625-whisker-backend-key-pair\") pod \"6aa2277f-343d-4334-98e9-82987b2b5625\" (UID: \"6aa2277f-343d-4334-98e9-82987b2b5625\") " Sep 10 23:50:33.555291 kubelet[3597]: I0910 23:50:33.554824 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6aa2277f-343d-4334-98e9-82987b2b5625-whisker-ca-bundle\") pod \"6aa2277f-343d-4334-98e9-82987b2b5625\" (UID: \"6aa2277f-343d-4334-98e9-82987b2b5625\") " Sep 10 23:50:33.555291 kubelet[3597]: I0910 23:50:33.554869 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpdvh\" (UniqueName: \"kubernetes.io/projected/6aa2277f-343d-4334-98e9-82987b2b5625-kube-api-access-gpdvh\") pod \"6aa2277f-343d-4334-98e9-82987b2b5625\" (UID: \"6aa2277f-343d-4334-98e9-82987b2b5625\") " Sep 10 23:50:33.562304 kubelet[3597]: I0910 23:50:33.560989 3597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa2277f-343d-4334-98e9-82987b2b5625-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6aa2277f-343d-4334-98e9-82987b2b5625" (UID: "6aa2277f-343d-4334-98e9-82987b2b5625"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:50:33.591209 systemd[1]: var-lib-kubelet-pods-6aa2277f\x2d343d\x2d4334\x2d98e9\x2d82987b2b5625-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgpdvh.mount: Deactivated successfully. Sep 10 23:50:33.598948 kubelet[3597]: I0910 23:50:33.595598 3597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa2277f-343d-4334-98e9-82987b2b5625-kube-api-access-gpdvh" (OuterVolumeSpecName: "kube-api-access-gpdvh") pod "6aa2277f-343d-4334-98e9-82987b2b5625" (UID: "6aa2277f-343d-4334-98e9-82987b2b5625"). InnerVolumeSpecName "kube-api-access-gpdvh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:50:33.598948 kubelet[3597]: I0910 23:50:33.598846 3597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa2277f-343d-4334-98e9-82987b2b5625-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6aa2277f-343d-4334-98e9-82987b2b5625" (UID: "6aa2277f-343d-4334-98e9-82987b2b5625"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 23:50:33.599836 systemd[1]: var-lib-kubelet-pods-6aa2277f\x2d343d\x2d4334\x2d98e9\x2d82987b2b5625-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 10 23:50:33.656127 kubelet[3597]: I0910 23:50:33.656050 3597 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6aa2277f-343d-4334-98e9-82987b2b5625-whisker-backend-key-pair\") on node \"ip-172-31-28-68\" DevicePath \"\"" Sep 10 23:50:33.656127 kubelet[3597]: I0910 23:50:33.656116 3597 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6aa2277f-343d-4334-98e9-82987b2b5625-whisker-ca-bundle\") on node \"ip-172-31-28-68\" DevicePath \"\"" Sep 10 23:50:33.657686 kubelet[3597]: I0910 23:50:33.656142 3597 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gpdvh\" (UniqueName: \"kubernetes.io/projected/6aa2277f-343d-4334-98e9-82987b2b5625-kube-api-access-gpdvh\") on node \"ip-172-31-28-68\" DevicePath \"\"" Sep 10 23:50:33.822910 systemd[1]: Removed slice kubepods-besteffort-pod6aa2277f_343d_4334_98e9_82987b2b5625.slice - libcontainer container kubepods-besteffort-pod6aa2277f_343d_4334_98e9_82987b2b5625.slice. 
Sep 10 23:50:33.853133 kubelet[3597]: I0910 23:50:33.853033 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2bhvm" podStartSLOduration=3.471830533 podStartE2EDuration="20.853000567s" podCreationTimestamp="2025-09-10 23:50:13 +0000 UTC" firstStartedPulling="2025-09-10 23:50:15.258134891 +0000 UTC m=+31.393645237" lastFinishedPulling="2025-09-10 23:50:32.639304925 +0000 UTC m=+48.774815271" observedRunningTime="2025-09-10 23:50:33.582203334 +0000 UTC m=+49.717713896" watchObservedRunningTime="2025-09-10 23:50:33.853000567 +0000 UTC m=+49.988510913" Sep 10 23:50:33.926344 containerd[2003]: time="2025-09-10T23:50:33.925928660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286\" id:\"58f82a58c3afe6037abdf267e8257704d5254183fdb104ec9faa7f8c9ba5b4ca\" pid:4634 exit_status:1 exited_at:{seconds:1757548233 nanos:924810764}" Sep 10 23:50:33.971079 systemd[1]: Created slice kubepods-besteffort-pod594c3381_49cd_494d_b220_1d177c55a28b.slice - libcontainer container kubepods-besteffort-pod594c3381_49cd_494d_b220_1d177c55a28b.slice. 
Sep 10 23:50:34.061329 kubelet[3597]: I0910 23:50:34.061199 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq76s\" (UniqueName: \"kubernetes.io/projected/594c3381-49cd-494d-b220-1d177c55a28b-kube-api-access-zq76s\") pod \"whisker-56d665d69-dr8hz\" (UID: \"594c3381-49cd-494d-b220-1d177c55a28b\") " pod="calico-system/whisker-56d665d69-dr8hz" Sep 10 23:50:34.061673 kubelet[3597]: I0910 23:50:34.061422 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/594c3381-49cd-494d-b220-1d177c55a28b-whisker-ca-bundle\") pod \"whisker-56d665d69-dr8hz\" (UID: \"594c3381-49cd-494d-b220-1d177c55a28b\") " pod="calico-system/whisker-56d665d69-dr8hz" Sep 10 23:50:34.061673 kubelet[3597]: I0910 23:50:34.061476 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/594c3381-49cd-494d-b220-1d177c55a28b-whisker-backend-key-pair\") pod \"whisker-56d665d69-dr8hz\" (UID: \"594c3381-49cd-494d-b220-1d177c55a28b\") " pod="calico-system/whisker-56d665d69-dr8hz" Sep 10 23:50:34.176406 kubelet[3597]: I0910 23:50:34.172952 3597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa2277f-343d-4334-98e9-82987b2b5625" path="/var/lib/kubelet/pods/6aa2277f-343d-4334-98e9-82987b2b5625/volumes" Sep 10 23:50:34.282785 containerd[2003]: time="2025-09-10T23:50:34.282693606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d665d69-dr8hz,Uid:594c3381-49cd-494d-b220-1d177c55a28b,Namespace:calico-system,Attempt:0,}" Sep 10 23:50:34.656901 (udev-worker)[4610]: Network interface NamePolicy= disabled on kernel command line. 
Sep 10 23:50:34.660191 systemd-networkd[1914]: cali30463567bdf: Link UP Sep 10 23:50:34.663325 systemd-networkd[1914]: cali30463567bdf: Gained carrier Sep 10 23:50:34.709017 containerd[2003]: 2025-09-10 23:50:34.339 [INFO][4660] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:50:34.709017 containerd[2003]: 2025-09-10 23:50:34.453 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0 whisker-56d665d69- calico-system 594c3381-49cd-494d-b220-1d177c55a28b 892 0 2025-09-10 23:50:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56d665d69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-68 whisker-56d665d69-dr8hz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali30463567bdf [] [] }} ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Namespace="calico-system" Pod="whisker-56d665d69-dr8hz" WorkloadEndpoint="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-" Sep 10 23:50:34.709017 containerd[2003]: 2025-09-10 23:50:34.453 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Namespace="calico-system" Pod="whisker-56d665d69-dr8hz" WorkloadEndpoint="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0" Sep 10 23:50:34.709017 containerd[2003]: 2025-09-10 23:50:34.554 [INFO][4673] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" HandleID="k8s-pod-network.06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Workload="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0" Sep 10 23:50:34.709565 containerd[2003]: 2025-09-10 23:50:34.554 [INFO][4673] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" HandleID="k8s-pod-network.06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Workload="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001224c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-68", "pod":"whisker-56d665d69-dr8hz", "timestamp":"2025-09-10 23:50:34.553991419 +0000 UTC"}, Hostname:"ip-172-31-28-68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:50:34.709565 containerd[2003]: 2025-09-10 23:50:34.554 [INFO][4673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:50:34.709565 containerd[2003]: 2025-09-10 23:50:34.554 [INFO][4673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:50:34.709565 containerd[2003]: 2025-09-10 23:50:34.555 [INFO][4673] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-68' Sep 10 23:50:34.709565 containerd[2003]: 2025-09-10 23:50:34.579 [INFO][4673] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" host="ip-172-31-28-68" Sep 10 23:50:34.709565 containerd[2003]: 2025-09-10 23:50:34.591 [INFO][4673] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-68" Sep 10 23:50:34.709565 containerd[2003]: 2025-09-10 23:50:34.599 [INFO][4673] ipam/ipam.go 511: Trying affinity for 192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:34.709565 containerd[2003]: 2025-09-10 23:50:34.603 [INFO][4673] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:34.709565 containerd[2003]: 2025-09-10 23:50:34.607 [INFO][4673] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:34.709565 containerd[2003]: 2025-09-10 23:50:34.607 [INFO][4673] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.192/26 handle="k8s-pod-network.06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" host="ip-172-31-28-68" Sep 10 23:50:34.710452 containerd[2003]: 2025-09-10 23:50:34.610 [INFO][4673] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e Sep 10 23:50:34.710452 containerd[2003]: 2025-09-10 23:50:34.617 [INFO][4673] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.192/26 handle="k8s-pod-network.06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" host="ip-172-31-28-68" Sep 10 23:50:34.710452 containerd[2003]: 2025-09-10 23:50:34.628 [INFO][4673] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.193/26] block=192.168.46.192/26 
handle="k8s-pod-network.06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" host="ip-172-31-28-68" Sep 10 23:50:34.710452 containerd[2003]: 2025-09-10 23:50:34.628 [INFO][4673] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.193/26] handle="k8s-pod-network.06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" host="ip-172-31-28-68" Sep 10 23:50:34.710452 containerd[2003]: 2025-09-10 23:50:34.628 [INFO][4673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:50:34.710452 containerd[2003]: 2025-09-10 23:50:34.628 [INFO][4673] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.193/26] IPv6=[] ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" HandleID="k8s-pod-network.06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Workload="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0" Sep 10 23:50:34.711109 containerd[2003]: 2025-09-10 23:50:34.636 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Namespace="calico-system" Pod="whisker-56d665d69-dr8hz" WorkloadEndpoint="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0", GenerateName:"whisker-56d665d69-", Namespace:"calico-system", SelfLink:"", UID:"594c3381-49cd-494d-b220-1d177c55a28b", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56d665d69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"", Pod:"whisker-56d665d69-dr8hz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.46.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali30463567bdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:34.711109 containerd[2003]: 2025-09-10 23:50:34.637 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.193/32] ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Namespace="calico-system" Pod="whisker-56d665d69-dr8hz" WorkloadEndpoint="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0" Sep 10 23:50:34.711643 containerd[2003]: 2025-09-10 23:50:34.637 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30463567bdf ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Namespace="calico-system" Pod="whisker-56d665d69-dr8hz" WorkloadEndpoint="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0" Sep 10 23:50:34.711643 containerd[2003]: 2025-09-10 23:50:34.664 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Namespace="calico-system" Pod="whisker-56d665d69-dr8hz" WorkloadEndpoint="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0" Sep 10 23:50:34.711755 containerd[2003]: 2025-09-10 23:50:34.668 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Namespace="calico-system" 
Pod="whisker-56d665d69-dr8hz" WorkloadEndpoint="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0", GenerateName:"whisker-56d665d69-", Namespace:"calico-system", SelfLink:"", UID:"594c3381-49cd-494d-b220-1d177c55a28b", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56d665d69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e", Pod:"whisker-56d665d69-dr8hz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.46.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali30463567bdf", MAC:"76:e6:68:be:f8:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:34.711894 containerd[2003]: 2025-09-10 23:50:34.698 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" Namespace="calico-system" Pod="whisker-56d665d69-dr8hz" WorkloadEndpoint="ip--172--31--28--68-k8s-whisker--56d665d69--dr8hz-eth0" Sep 10 23:50:34.760979 containerd[2003]: 
time="2025-09-10T23:50:34.760749776Z" level=info msg="connecting to shim 06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e" address="unix:///run/containerd/s/ec7f47b09bb8c7487898b03c726d47d016bc6be05fc6f8de8bc269062a5dac85" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:34.809043 containerd[2003]: time="2025-09-10T23:50:34.808980548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286\" id:\"bc2d8c8452b93d571228236bdc96ebde860fb99e98b6eb30d61a5929f83623da\" pid:4692 exit_status:1 exited_at:{seconds:1757548234 nanos:807017924}" Sep 10 23:50:34.841560 systemd[1]: Started cri-containerd-06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e.scope - libcontainer container 06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e. Sep 10 23:50:34.929478 containerd[2003]: time="2025-09-10T23:50:34.929179353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d665d69-dr8hz,Uid:594c3381-49cd-494d-b220-1d177c55a28b,Namespace:calico-system,Attempt:0,} returns sandbox id \"06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e\"" Sep 10 23:50:34.934154 containerd[2003]: time="2025-09-10T23:50:34.934109097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 23:50:35.961398 systemd-networkd[1914]: cali30463567bdf: Gained IPv6LL Sep 10 23:50:36.162940 containerd[2003]: time="2025-09-10T23:50:36.162816235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677c6fffd9-9nnvf,Uid:e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb,Namespace:calico-system,Attempt:0,}" Sep 10 23:50:36.483679 systemd-networkd[1914]: cali75c27cb0947: Link UP Sep 10 23:50:36.486074 systemd-networkd[1914]: cali75c27cb0947: Gained carrier Sep 10 23:50:36.546496 containerd[2003]: 2025-09-10 23:50:36.280 [INFO][4880] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0 calico-kube-controllers-677c6fffd9- calico-system e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb 821 0 2025-09-10 23:50:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:677c6fffd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-68 calico-kube-controllers-677c6fffd9-9nnvf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali75c27cb0947 [] [] }} ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" Namespace="calico-system" Pod="calico-kube-controllers-677c6fffd9-9nnvf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-" Sep 10 23:50:36.546496 containerd[2003]: 2025-09-10 23:50:36.282 [INFO][4880] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" Namespace="calico-system" Pod="calico-kube-controllers-677c6fffd9-9nnvf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0" Sep 10 23:50:36.546496 containerd[2003]: 2025-09-10 23:50:36.353 [INFO][4894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" HandleID="k8s-pod-network.b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" Workload="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0" Sep 10 23:50:36.546958 containerd[2003]: 2025-09-10 23:50:36.353 [INFO][4894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" HandleID="k8s-pod-network.b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" 
Workload="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3a30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-68", "pod":"calico-kube-controllers-677c6fffd9-9nnvf", "timestamp":"2025-09-10 23:50:36.353200724 +0000 UTC"}, Hostname:"ip-172-31-28-68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:50:36.546958 containerd[2003]: 2025-09-10 23:50:36.353 [INFO][4894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:50:36.546958 containerd[2003]: 2025-09-10 23:50:36.353 [INFO][4894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:50:36.546958 containerd[2003]: 2025-09-10 23:50:36.353 [INFO][4894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-68' Sep 10 23:50:36.546958 containerd[2003]: 2025-09-10 23:50:36.373 [INFO][4894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" host="ip-172-31-28-68" Sep 10 23:50:36.546958 containerd[2003]: 2025-09-10 23:50:36.388 [INFO][4894] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-68" Sep 10 23:50:36.546958 containerd[2003]: 2025-09-10 23:50:36.404 [INFO][4894] ipam/ipam.go 511: Trying affinity for 192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:36.546958 containerd[2003]: 2025-09-10 23:50:36.412 [INFO][4894] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:36.546958 containerd[2003]: 2025-09-10 23:50:36.423 [INFO][4894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:36.549371 containerd[2003]: 2025-09-10 23:50:36.424 
[INFO][4894] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.192/26 handle="k8s-pod-network.b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" host="ip-172-31-28-68" Sep 10 23:50:36.549371 containerd[2003]: 2025-09-10 23:50:36.429 [INFO][4894] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d Sep 10 23:50:36.549371 containerd[2003]: 2025-09-10 23:50:36.450 [INFO][4894] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.192/26 handle="k8s-pod-network.b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" host="ip-172-31-28-68" Sep 10 23:50:36.549371 containerd[2003]: 2025-09-10 23:50:36.467 [INFO][4894] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.194/26] block=192.168.46.192/26 handle="k8s-pod-network.b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" host="ip-172-31-28-68" Sep 10 23:50:36.549371 containerd[2003]: 2025-09-10 23:50:36.467 [INFO][4894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.194/26] handle="k8s-pod-network.b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" host="ip-172-31-28-68" Sep 10 23:50:36.549371 containerd[2003]: 2025-09-10 23:50:36.467 [INFO][4894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:50:36.549371 containerd[2003]: 2025-09-10 23:50:36.467 [INFO][4894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.194/26] IPv6=[] ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" HandleID="k8s-pod-network.b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" Workload="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0" Sep 10 23:50:36.551312 containerd[2003]: 2025-09-10 23:50:36.475 [INFO][4880] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" Namespace="calico-system" Pod="calico-kube-controllers-677c6fffd9-9nnvf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0", GenerateName:"calico-kube-controllers-677c6fffd9-", Namespace:"calico-system", SelfLink:"", UID:"e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677c6fffd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"", Pod:"calico-kube-controllers-677c6fffd9-9nnvf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.46.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75c27cb0947", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:36.552560 containerd[2003]: 2025-09-10 23:50:36.475 [INFO][4880] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.194/32] ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" Namespace="calico-system" Pod="calico-kube-controllers-677c6fffd9-9nnvf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0" Sep 10 23:50:36.552560 containerd[2003]: 2025-09-10 23:50:36.475 [INFO][4880] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75c27cb0947 ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" Namespace="calico-system" Pod="calico-kube-controllers-677c6fffd9-9nnvf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0" Sep 10 23:50:36.552560 containerd[2003]: 2025-09-10 23:50:36.489 [INFO][4880] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" Namespace="calico-system" Pod="calico-kube-controllers-677c6fffd9-9nnvf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0" Sep 10 23:50:36.552774 containerd[2003]: 2025-09-10 23:50:36.490 [INFO][4880] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" Namespace="calico-system" Pod="calico-kube-controllers-677c6fffd9-9nnvf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0", GenerateName:"calico-kube-controllers-677c6fffd9-", Namespace:"calico-system", SelfLink:"", UID:"e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677c6fffd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d", Pod:"calico-kube-controllers-677c6fffd9-9nnvf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75c27cb0947", MAC:"ba:98:09:41:4e:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:36.553046 containerd[2003]: 2025-09-10 23:50:36.516 [INFO][4880] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" Namespace="calico-system" Pod="calico-kube-controllers-677c6fffd9-9nnvf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--kube--controllers--677c6fffd9--9nnvf-eth0" 
Sep 10 23:50:36.629970 containerd[2003]: time="2025-09-10T23:50:36.629742813Z" level=info msg="connecting to shim b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d" address="unix:///run/containerd/s/1d5b66b461d22370841dd7c161a1ce19fb963133a74ad31ccc3ad5023f3af53f" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:36.722010 containerd[2003]: time="2025-09-10T23:50:36.721663390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:36.730703 containerd[2003]: time="2025-09-10T23:50:36.730518538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 10 23:50:36.735833 containerd[2003]: time="2025-09-10T23:50:36.732904018Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:36.739857 systemd[1]: Started cri-containerd-b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d.scope - libcontainer container b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d. 
Sep 10 23:50:36.753709 containerd[2003]: time="2025-09-10T23:50:36.753492430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:36.756753 containerd[2003]: time="2025-09-10T23:50:36.756679702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.822068105s" Sep 10 23:50:36.756753 containerd[2003]: time="2025-09-10T23:50:36.756741814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 23:50:36.769767 containerd[2003]: time="2025-09-10T23:50:36.769162738Z" level=info msg="CreateContainer within sandbox \"06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 23:50:36.789919 containerd[2003]: time="2025-09-10T23:50:36.789847798Z" level=info msg="Container e7bacebc181cd2367d3a9ed8d86b286732dd11238ae1e7ad415a06f1e05bd96f: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:36.815289 containerd[2003]: time="2025-09-10T23:50:36.813997582Z" level=info msg="CreateContainer within sandbox \"06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e7bacebc181cd2367d3a9ed8d86b286732dd11238ae1e7ad415a06f1e05bd96f\"" Sep 10 23:50:36.818527 containerd[2003]: time="2025-09-10T23:50:36.818149018Z" level=info msg="StartContainer for \"e7bacebc181cd2367d3a9ed8d86b286732dd11238ae1e7ad415a06f1e05bd96f\"" Sep 10 23:50:36.826930 
containerd[2003]: time="2025-09-10T23:50:36.826809178Z" level=info msg="connecting to shim e7bacebc181cd2367d3a9ed8d86b286732dd11238ae1e7ad415a06f1e05bd96f" address="unix:///run/containerd/s/ec7f47b09bb8c7487898b03c726d47d016bc6be05fc6f8de8bc269062a5dac85" protocol=ttrpc version=3 Sep 10 23:50:36.897802 systemd[1]: Started cri-containerd-e7bacebc181cd2367d3a9ed8d86b286732dd11238ae1e7ad415a06f1e05bd96f.scope - libcontainer container e7bacebc181cd2367d3a9ed8d86b286732dd11238ae1e7ad415a06f1e05bd96f. Sep 10 23:50:36.947648 systemd-networkd[1914]: vxlan.calico: Link UP Sep 10 23:50:36.947671 systemd-networkd[1914]: vxlan.calico: Gained carrier Sep 10 23:50:37.014455 containerd[2003]: time="2025-09-10T23:50:37.014177683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677c6fffd9-9nnvf,Uid:e160ddd3-6a46-4cd7-9f45-54a7ce4bf1eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d\"" Sep 10 23:50:37.024335 containerd[2003]: time="2025-09-10T23:50:37.023076103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 23:50:37.040434 (udev-worker)[4606]: Network interface NamePolicy= disabled on kernel command line. 
Sep 10 23:50:37.140869 containerd[2003]: time="2025-09-10T23:50:37.140794820Z" level=info msg="StartContainer for \"e7bacebc181cd2367d3a9ed8d86b286732dd11238ae1e7ad415a06f1e05bd96f\" returns successfully" Sep 10 23:50:37.163796 containerd[2003]: time="2025-09-10T23:50:37.163621508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f585bb-8qbrf,Uid:555f34ed-e16f-48e0-b1d2-b0ae99507519,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:50:37.165907 containerd[2003]: time="2025-09-10T23:50:37.164181440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdxn4,Uid:136f9386-4257-4e17-9cdf-6a6536307346,Namespace:calico-system,Attempt:0,}" Sep 10 23:50:37.493891 systemd-networkd[1914]: cali1704bd77da6: Link UP Sep 10 23:50:37.496519 systemd-networkd[1914]: cali1704bd77da6: Gained carrier Sep 10 23:50:37.545879 containerd[2003]: 2025-09-10 23:50:37.298 [INFO][5014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0 csi-node-driver- calico-system 136f9386-4257-4e17-9cdf-6a6536307346 690 0 2025-09-10 23:50:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-68 csi-node-driver-wdxn4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1704bd77da6 [] [] }} ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Namespace="calico-system" Pod="csi-node-driver-wdxn4" WorkloadEndpoint="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-" Sep 10 23:50:37.545879 containerd[2003]: 2025-09-10 23:50:37.300 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Namespace="calico-system" Pod="csi-node-driver-wdxn4" WorkloadEndpoint="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0" Sep 10 23:50:37.545879 containerd[2003]: 2025-09-10 23:50:37.395 [INFO][5044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" HandleID="k8s-pod-network.50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Workload="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0" Sep 10 23:50:37.546694 containerd[2003]: 2025-09-10 23:50:37.395 [INFO][5044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" HandleID="k8s-pod-network.50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Workload="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000327920), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-68", "pod":"csi-node-driver-wdxn4", "timestamp":"2025-09-10 23:50:37.395380137 +0000 UTC"}, Hostname:"ip-172-31-28-68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:50:37.546694 containerd[2003]: 2025-09-10 23:50:37.395 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:50:37.546694 containerd[2003]: 2025-09-10 23:50:37.395 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:50:37.546694 containerd[2003]: 2025-09-10 23:50:37.395 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-68' Sep 10 23:50:37.546694 containerd[2003]: 2025-09-10 23:50:37.418 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" host="ip-172-31-28-68" Sep 10 23:50:37.546694 containerd[2003]: 2025-09-10 23:50:37.429 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-68" Sep 10 23:50:37.546694 containerd[2003]: 2025-09-10 23:50:37.438 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:37.546694 containerd[2003]: 2025-09-10 23:50:37.441 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:37.546694 containerd[2003]: 2025-09-10 23:50:37.446 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:37.546694 containerd[2003]: 2025-09-10 23:50:37.446 [INFO][5044] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.192/26 handle="k8s-pod-network.50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" host="ip-172-31-28-68" Sep 10 23:50:37.547242 containerd[2003]: 2025-09-10 23:50:37.449 [INFO][5044] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8 Sep 10 23:50:37.547242 containerd[2003]: 2025-09-10 23:50:37.456 [INFO][5044] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.192/26 handle="k8s-pod-network.50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" host="ip-172-31-28-68" Sep 10 23:50:37.547242 containerd[2003]: 2025-09-10 23:50:37.474 [INFO][5044] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.195/26] block=192.168.46.192/26 
handle="k8s-pod-network.50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" host="ip-172-31-28-68" Sep 10 23:50:37.547242 containerd[2003]: 2025-09-10 23:50:37.474 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.195/26] handle="k8s-pod-network.50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" host="ip-172-31-28-68" Sep 10 23:50:37.547242 containerd[2003]: 2025-09-10 23:50:37.474 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:50:37.547242 containerd[2003]: 2025-09-10 23:50:37.475 [INFO][5044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.195/26] IPv6=[] ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" HandleID="k8s-pod-network.50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Workload="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0" Sep 10 23:50:37.549011 containerd[2003]: 2025-09-10 23:50:37.481 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Namespace="calico-system" Pod="csi-node-driver-wdxn4" WorkloadEndpoint="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"136f9386-4257-4e17-9cdf-6a6536307346", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"", Pod:"csi-node-driver-wdxn4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1704bd77da6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:37.550672 containerd[2003]: 2025-09-10 23:50:37.482 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.195/32] ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Namespace="calico-system" Pod="csi-node-driver-wdxn4" WorkloadEndpoint="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0" Sep 10 23:50:37.550672 containerd[2003]: 2025-09-10 23:50:37.482 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1704bd77da6 ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Namespace="calico-system" Pod="csi-node-driver-wdxn4" WorkloadEndpoint="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0" Sep 10 23:50:37.550672 containerd[2003]: 2025-09-10 23:50:37.500 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Namespace="calico-system" Pod="csi-node-driver-wdxn4" WorkloadEndpoint="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0" Sep 10 23:50:37.550854 containerd[2003]: 2025-09-10 23:50:37.505 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Namespace="calico-system" Pod="csi-node-driver-wdxn4" WorkloadEndpoint="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"136f9386-4257-4e17-9cdf-6a6536307346", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8", Pod:"csi-node-driver-wdxn4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1704bd77da6", MAC:"ea:20:20:4e:bb:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:37.550995 containerd[2003]: 2025-09-10 23:50:37.535 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" Namespace="calico-system" Pod="csi-node-driver-wdxn4" WorkloadEndpoint="ip--172--31--28--68-k8s-csi--node--driver--wdxn4-eth0" Sep 10 23:50:37.623083 containerd[2003]: time="2025-09-10T23:50:37.622998622Z" level=info msg="connecting to shim 50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8" address="unix:///run/containerd/s/ee98e02840a34516de4cf3eaf3aac0819f54e55f0e9ab76c676869e8bd93b0bb" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:37.698358 systemd[1]: Started cri-containerd-50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8.scope - libcontainer container 50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8. Sep 10 23:50:37.716519 systemd-networkd[1914]: cali0ccf8f74456: Link UP Sep 10 23:50:37.723733 systemd-networkd[1914]: cali0ccf8f74456: Gained carrier Sep 10 23:50:37.773712 containerd[2003]: 2025-09-10 23:50:37.318 [INFO][5017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0 calico-apiserver-5d84f585bb- calico-apiserver 555f34ed-e16f-48e0-b1d2-b0ae99507519 823 0 2025-09-10 23:50:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d84f585bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-68 calico-apiserver-5d84f585bb-8qbrf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ccf8f74456 [] [] }} ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-8qbrf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-" Sep 10 23:50:37.773712 containerd[2003]: 2025-09-10 23:50:37.319 
[INFO][5017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-8qbrf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0" Sep 10 23:50:37.773712 containerd[2003]: 2025-09-10 23:50:37.436 [INFO][5050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" HandleID="k8s-pod-network.1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Workload="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0" Sep 10 23:50:37.774456 containerd[2003]: 2025-09-10 23:50:37.437 [INFO][5050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" HandleID="k8s-pod-network.1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Workload="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032a510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-68", "pod":"calico-apiserver-5d84f585bb-8qbrf", "timestamp":"2025-09-10 23:50:37.435839157 +0000 UTC"}, Hostname:"ip-172-31-28-68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:50:37.774456 containerd[2003]: 2025-09-10 23:50:37.437 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:50:37.774456 containerd[2003]: 2025-09-10 23:50:37.474 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:50:37.774456 containerd[2003]: 2025-09-10 23:50:37.475 [INFO][5050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-68' Sep 10 23:50:37.774456 containerd[2003]: 2025-09-10 23:50:37.520 [INFO][5050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" host="ip-172-31-28-68" Sep 10 23:50:37.774456 containerd[2003]: 2025-09-10 23:50:37.542 [INFO][5050] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-68" Sep 10 23:50:37.774456 containerd[2003]: 2025-09-10 23:50:37.556 [INFO][5050] ipam/ipam.go 511: Trying affinity for 192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:37.774456 containerd[2003]: 2025-09-10 23:50:37.563 [INFO][5050] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:37.774456 containerd[2003]: 2025-09-10 23:50:37.570 [INFO][5050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:37.774927 containerd[2003]: 2025-09-10 23:50:37.570 [INFO][5050] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.192/26 handle="k8s-pod-network.1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" host="ip-172-31-28-68" Sep 10 23:50:37.774927 containerd[2003]: 2025-09-10 23:50:37.575 [INFO][5050] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e Sep 10 23:50:37.774927 containerd[2003]: 2025-09-10 23:50:37.599 [INFO][5050] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.192/26 handle="k8s-pod-network.1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" host="ip-172-31-28-68" Sep 10 23:50:37.774927 containerd[2003]: 2025-09-10 23:50:37.671 [INFO][5050] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.196/26] block=192.168.46.192/26 
handle="k8s-pod-network.1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" host="ip-172-31-28-68" Sep 10 23:50:37.774927 containerd[2003]: 2025-09-10 23:50:37.672 [INFO][5050] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.196/26] handle="k8s-pod-network.1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" host="ip-172-31-28-68" Sep 10 23:50:37.774927 containerd[2003]: 2025-09-10 23:50:37.672 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:50:37.774927 containerd[2003]: 2025-09-10 23:50:37.672 [INFO][5050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.196/26] IPv6=[] ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" HandleID="k8s-pod-network.1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Workload="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0" Sep 10 23:50:37.775238 containerd[2003]: 2025-09-10 23:50:37.695 [INFO][5017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-8qbrf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0", GenerateName:"calico-apiserver-5d84f585bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"555f34ed-e16f-48e0-b1d2-b0ae99507519", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f585bb", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"", Pod:"calico-apiserver-5d84f585bb-8qbrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ccf8f74456", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:37.775393 containerd[2003]: 2025-09-10 23:50:37.695 [INFO][5017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.196/32] ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-8qbrf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0" Sep 10 23:50:37.775393 containerd[2003]: 2025-09-10 23:50:37.696 [INFO][5017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ccf8f74456 ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-8qbrf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0" Sep 10 23:50:37.775393 containerd[2003]: 2025-09-10 23:50:37.722 [INFO][5017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-8qbrf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0" Sep 10 
23:50:37.775540 containerd[2003]: 2025-09-10 23:50:37.726 [INFO][5017] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-8qbrf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0", GenerateName:"calico-apiserver-5d84f585bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"555f34ed-e16f-48e0-b1d2-b0ae99507519", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f585bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e", Pod:"calico-apiserver-5d84f585bb-8qbrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ccf8f74456", MAC:"36:5e:65:c9:9c:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 
23:50:37.775656 containerd[2003]: 2025-09-10 23:50:37.763 [INFO][5017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-8qbrf" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--8qbrf-eth0" Sep 10 23:50:37.845334 containerd[2003]: time="2025-09-10T23:50:37.844693223Z" level=info msg="connecting to shim 1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e" address="unix:///run/containerd/s/0d7a039f95926ca49364d5d1f4365b276a881a67abaf81c8be0fcd045e7b3c90" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:37.930588 systemd[1]: Started cri-containerd-1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e.scope - libcontainer container 1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e. Sep 10 23:50:38.114221 containerd[2003]: time="2025-09-10T23:50:38.114148929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdxn4,Uid:136f9386-4257-4e17-9cdf-6a6536307346,Namespace:calico-system,Attempt:0,} returns sandbox id \"50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8\"" Sep 10 23:50:38.164280 containerd[2003]: time="2025-09-10T23:50:38.164042433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-r2cst,Uid:2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed,Namespace:calico-system,Attempt:0,}" Sep 10 23:50:38.200669 systemd-networkd[1914]: cali75c27cb0947: Gained IPv6LL Sep 10 23:50:38.440240 containerd[2003]: time="2025-09-10T23:50:38.439990078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f585bb-8qbrf,Uid:555f34ed-e16f-48e0-b1d2-b0ae99507519,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e\"" Sep 10 23:50:38.704670 systemd-networkd[1914]: cali1fd06c5cf2d: Link UP Sep 10 23:50:38.709246 
systemd-networkd[1914]: cali1fd06c5cf2d: Gained carrier Sep 10 23:50:38.712472 systemd-networkd[1914]: vxlan.calico: Gained IPv6LL Sep 10 23:50:38.761406 containerd[2003]: 2025-09-10 23:50:38.318 [INFO][5169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0 goldmane-54d579b49d- calico-system 2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed 827 0 2025-09-10 23:50:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-68 goldmane-54d579b49d-r2cst eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1fd06c5cf2d [] [] }} ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Namespace="calico-system" Pod="goldmane-54d579b49d-r2cst" WorkloadEndpoint="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-" Sep 10 23:50:38.761406 containerd[2003]: 2025-09-10 23:50:38.319 [INFO][5169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Namespace="calico-system" Pod="goldmane-54d579b49d-r2cst" WorkloadEndpoint="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0" Sep 10 23:50:38.761406 containerd[2003]: 2025-09-10 23:50:38.525 [INFO][5187] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" HandleID="k8s-pod-network.1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Workload="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0" Sep 10 23:50:38.761958 containerd[2003]: 2025-09-10 23:50:38.528 [INFO][5187] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" 
HandleID="k8s-pod-network.1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Workload="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103900), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-68", "pod":"goldmane-54d579b49d-r2cst", "timestamp":"2025-09-10 23:50:38.524981363 +0000 UTC"}, Hostname:"ip-172-31-28-68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:50:38.761958 containerd[2003]: 2025-09-10 23:50:38.528 [INFO][5187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:50:38.761958 containerd[2003]: 2025-09-10 23:50:38.528 [INFO][5187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:50:38.761958 containerd[2003]: 2025-09-10 23:50:38.528 [INFO][5187] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-68' Sep 10 23:50:38.761958 containerd[2003]: 2025-09-10 23:50:38.562 [INFO][5187] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" host="ip-172-31-28-68" Sep 10 23:50:38.761958 containerd[2003]: 2025-09-10 23:50:38.593 [INFO][5187] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-68" Sep 10 23:50:38.761958 containerd[2003]: 2025-09-10 23:50:38.610 [INFO][5187] ipam/ipam.go 511: Trying affinity for 192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:38.761958 containerd[2003]: 2025-09-10 23:50:38.618 [INFO][5187] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:38.761958 containerd[2003]: 2025-09-10 23:50:38.633 [INFO][5187] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 
23:50:38.762462 containerd[2003]: 2025-09-10 23:50:38.633 [INFO][5187] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.192/26 handle="k8s-pod-network.1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" host="ip-172-31-28-68" Sep 10 23:50:38.762462 containerd[2003]: 2025-09-10 23:50:38.643 [INFO][5187] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017 Sep 10 23:50:38.762462 containerd[2003]: 2025-09-10 23:50:38.659 [INFO][5187] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.192/26 handle="k8s-pod-network.1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" host="ip-172-31-28-68" Sep 10 23:50:38.762462 containerd[2003]: 2025-09-10 23:50:38.682 [INFO][5187] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.197/26] block=192.168.46.192/26 handle="k8s-pod-network.1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" host="ip-172-31-28-68" Sep 10 23:50:38.762462 containerd[2003]: 2025-09-10 23:50:38.682 [INFO][5187] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.197/26] handle="k8s-pod-network.1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" host="ip-172-31-28-68" Sep 10 23:50:38.762462 containerd[2003]: 2025-09-10 23:50:38.682 [INFO][5187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:50:38.762462 containerd[2003]: 2025-09-10 23:50:38.682 [INFO][5187] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.197/26] IPv6=[] ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" HandleID="k8s-pod-network.1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Workload="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0" Sep 10 23:50:38.762801 containerd[2003]: 2025-09-10 23:50:38.689 [INFO][5169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Namespace="calico-system" Pod="goldmane-54d579b49d-r2cst" WorkloadEndpoint="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"", Pod:"goldmane-54d579b49d-r2cst", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali1fd06c5cf2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:38.762801 containerd[2003]: 2025-09-10 23:50:38.689 [INFO][5169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.197/32] ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Namespace="calico-system" Pod="goldmane-54d579b49d-r2cst" WorkloadEndpoint="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0" Sep 10 23:50:38.762976 containerd[2003]: 2025-09-10 23:50:38.690 [INFO][5169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1fd06c5cf2d ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Namespace="calico-system" Pod="goldmane-54d579b49d-r2cst" WorkloadEndpoint="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0" Sep 10 23:50:38.762976 containerd[2003]: 2025-09-10 23:50:38.713 [INFO][5169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Namespace="calico-system" Pod="goldmane-54d579b49d-r2cst" WorkloadEndpoint="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0" Sep 10 23:50:38.763078 containerd[2003]: 2025-09-10 23:50:38.715 [INFO][5169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Namespace="calico-system" Pod="goldmane-54d579b49d-r2cst" WorkloadEndpoint="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed", ResourceVersion:"827", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017", Pod:"goldmane-54d579b49d-r2cst", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1fd06c5cf2d", MAC:"46:21:e3:68:12:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:38.763204 containerd[2003]: 2025-09-10 23:50:38.751 [INFO][5169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" Namespace="calico-system" Pod="goldmane-54d579b49d-r2cst" WorkloadEndpoint="ip--172--31--28--68-k8s-goldmane--54d579b49d--r2cst-eth0" Sep 10 23:50:38.866782 containerd[2003]: time="2025-09-10T23:50:38.866596260Z" level=info msg="connecting to shim 1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017" address="unix:///run/containerd/s/18bdfeb4bf6a79d0c2b3a5eefd16d22035c532f3a8d95082cfb4c455e8cf3b52" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:38.963779 systemd[1]: Started cri-containerd-1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017.scope - libcontainer container 
1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017. Sep 10 23:50:39.096589 systemd-networkd[1914]: cali1704bd77da6: Gained IPv6LL Sep 10 23:50:39.109397 containerd[2003]: time="2025-09-10T23:50:39.109227226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-r2cst,Uid:2b3bf6ae-6a35-4ecc-96ce-ee9e408d77ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017\"" Sep 10 23:50:39.165701 containerd[2003]: time="2025-09-10T23:50:39.165522574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f585bb-jrjmb,Uid:95691e3f-1910-43a6-be81-55198cb86931,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:50:39.167499 containerd[2003]: time="2025-09-10T23:50:39.167407054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7n5j,Uid:6accb33c-2d9a-40dc-89a3-62a9683dfac4,Namespace:kube-system,Attempt:0,}" Sep 10 23:50:39.171132 containerd[2003]: time="2025-09-10T23:50:39.170092738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v8tt6,Uid:f3070b0e-e694-4250-800c-4411250bc48a,Namespace:kube-system,Attempt:0,}" Sep 10 23:50:39.609601 systemd-networkd[1914]: cali0ccf8f74456: Gained IPv6LL Sep 10 23:50:39.639747 systemd-networkd[1914]: cali7406a2474f1: Link UP Sep 10 23:50:39.640499 systemd-networkd[1914]: cali7406a2474f1: Gained carrier Sep 10 23:50:39.688180 containerd[2003]: 2025-09-10 23:50:39.338 [INFO][5284] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0 coredns-668d6bf9bc- kube-system 6accb33c-2d9a-40dc-89a3-62a9683dfac4 824 0 2025-09-10 23:49:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-68 
coredns-668d6bf9bc-v7n5j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7406a2474f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7n5j" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-" Sep 10 23:50:39.688180 containerd[2003]: 2025-09-10 23:50:39.338 [INFO][5284] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7n5j" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0" Sep 10 23:50:39.688180 containerd[2003]: 2025-09-10 23:50:39.501 [INFO][5321] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" HandleID="k8s-pod-network.2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Workload="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0" Sep 10 23:50:39.688620 containerd[2003]: 2025-09-10 23:50:39.502 [INFO][5321] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" HandleID="k8s-pod-network.2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Workload="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000308030), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-68", "pod":"coredns-668d6bf9bc-v7n5j", "timestamp":"2025-09-10 23:50:39.50181822 +0000 UTC"}, Hostname:"ip-172-31-28-68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:50:39.688620 containerd[2003]: 2025-09-10 
23:50:39.502 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:50:39.688620 containerd[2003]: 2025-09-10 23:50:39.502 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:50:39.688620 containerd[2003]: 2025-09-10 23:50:39.502 [INFO][5321] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-68' Sep 10 23:50:39.688620 containerd[2003]: 2025-09-10 23:50:39.527 [INFO][5321] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" host="ip-172-31-28-68" Sep 10 23:50:39.688620 containerd[2003]: 2025-09-10 23:50:39.545 [INFO][5321] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-68" Sep 10 23:50:39.688620 containerd[2003]: 2025-09-10 23:50:39.562 [INFO][5321] ipam/ipam.go 511: Trying affinity for 192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:39.688620 containerd[2003]: 2025-09-10 23:50:39.571 [INFO][5321] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:39.688620 containerd[2003]: 2025-09-10 23:50:39.583 [INFO][5321] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:39.688620 containerd[2003]: 2025-09-10 23:50:39.585 [INFO][5321] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.192/26 handle="k8s-pod-network.2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" host="ip-172-31-28-68" Sep 10 23:50:39.691500 containerd[2003]: 2025-09-10 23:50:39.588 [INFO][5321] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a Sep 10 23:50:39.691500 containerd[2003]: 2025-09-10 23:50:39.603 [INFO][5321] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.192/26 
handle="k8s-pod-network.2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" host="ip-172-31-28-68" Sep 10 23:50:39.691500 containerd[2003]: 2025-09-10 23:50:39.620 [INFO][5321] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.198/26] block=192.168.46.192/26 handle="k8s-pod-network.2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" host="ip-172-31-28-68" Sep 10 23:50:39.691500 containerd[2003]: 2025-09-10 23:50:39.620 [INFO][5321] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.198/26] handle="k8s-pod-network.2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" host="ip-172-31-28-68" Sep 10 23:50:39.691500 containerd[2003]: 2025-09-10 23:50:39.620 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:50:39.691500 containerd[2003]: 2025-09-10 23:50:39.620 [INFO][5321] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.198/26] IPv6=[] ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" HandleID="k8s-pod-network.2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Workload="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0" Sep 10 23:50:39.692611 containerd[2003]: 2025-09-10 23:50:39.630 [INFO][5284] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7n5j" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6accb33c-2d9a-40dc-89a3-62a9683dfac4", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 49, 49, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"", Pod:"coredns-668d6bf9bc-v7n5j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7406a2474f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:39.692611 containerd[2003]: 2025-09-10 23:50:39.631 [INFO][5284] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.198/32] ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7n5j" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0" Sep 10 23:50:39.692611 containerd[2003]: 2025-09-10 23:50:39.631 [INFO][5284] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7406a2474f1 ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7n5j" 
WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0" Sep 10 23:50:39.692611 containerd[2003]: 2025-09-10 23:50:39.635 [INFO][5284] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7n5j" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0" Sep 10 23:50:39.692611 containerd[2003]: 2025-09-10 23:50:39.635 [INFO][5284] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7n5j" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6accb33c-2d9a-40dc-89a3-62a9683dfac4", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 49, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a", Pod:"coredns-668d6bf9bc-v7n5j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7406a2474f1", MAC:"1a:bc:c8:63:b4:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:39.692611 containerd[2003]: 2025-09-10 23:50:39.662 [INFO][5284] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7n5j" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v7n5j-eth0" Sep 10 23:50:39.791893 containerd[2003]: time="2025-09-10T23:50:39.789902209Z" level=info msg="connecting to shim 2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a" address="unix:///run/containerd/s/f71bc632a62deee6e99f84911aab730450ee44d27ef54e2d28bce12a00798109" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:39.858411 systemd-networkd[1914]: cali0e62bbdb0be: Link UP Sep 10 23:50:39.861152 systemd-networkd[1914]: cali0e62bbdb0be: Gained carrier Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.417 [INFO][5283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0 calico-apiserver-5d84f585bb- calico-apiserver 95691e3f-1910-43a6-be81-55198cb86931 826 0 2025-09-10 23:50:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d84f585bb 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-68 calico-apiserver-5d84f585bb-jrjmb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0e62bbdb0be [] [] }} ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-jrjmb" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.420 [INFO][5283] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-jrjmb" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.563 [INFO][5329] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" HandleID="k8s-pod-network.d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Workload="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.565 [INFO][5329] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" HandleID="k8s-pod-network.d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Workload="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031da60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-68", "pod":"calico-apiserver-5d84f585bb-jrjmb", "timestamp":"2025-09-10 23:50:39.563310204 +0000 UTC"}, Hostname:"ip-172-31-28-68", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.565 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.620 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.620 [INFO][5329] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-68' Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.655 [INFO][5329] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" host="ip-172-31-28-68" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.679 [INFO][5329] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-68" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.701 [INFO][5329] ipam/ipam.go 511: Trying affinity for 192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.711 [INFO][5329] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.721 [INFO][5329] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.723 [INFO][5329] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.192/26 handle="k8s-pod-network.d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" host="ip-172-31-28-68" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.730 [INFO][5329] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167 Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.747 [INFO][5329] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.192/26 handle="k8s-pod-network.d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" host="ip-172-31-28-68" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.792 [INFO][5329] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.199/26] block=192.168.46.192/26 handle="k8s-pod-network.d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" host="ip-172-31-28-68" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.792 [INFO][5329] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.199/26] handle="k8s-pod-network.d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" host="ip-172-31-28-68" Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.792 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:50:39.944611 containerd[2003]: 2025-09-10 23:50:39.792 [INFO][5329] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.199/26] IPv6=[] ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" HandleID="k8s-pod-network.d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Workload="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0" Sep 10 23:50:39.946837 containerd[2003]: 2025-09-10 23:50:39.817 [INFO][5283] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-jrjmb" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0", GenerateName:"calico-apiserver-5d84f585bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"95691e3f-1910-43a6-be81-55198cb86931", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f585bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"", Pod:"calico-apiserver-5d84f585bb-jrjmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e62bbdb0be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:39.946837 containerd[2003]: 2025-09-10 23:50:39.818 [INFO][5283] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.199/32] ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-jrjmb" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0" Sep 10 23:50:39.946837 containerd[2003]: 2025-09-10 23:50:39.820 [INFO][5283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e62bbdb0be ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-jrjmb" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0" Sep 10 23:50:39.946837 containerd[2003]: 2025-09-10 23:50:39.870 [INFO][5283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-jrjmb" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0" Sep 10 23:50:39.946837 containerd[2003]: 2025-09-10 23:50:39.878 [INFO][5283] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-jrjmb" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0", GenerateName:"calico-apiserver-5d84f585bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"95691e3f-1910-43a6-be81-55198cb86931", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 50, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f585bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167", Pod:"calico-apiserver-5d84f585bb-jrjmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e62bbdb0be", MAC:"9a:de:43:cb:23:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:39.946837 containerd[2003]: 2025-09-10 23:50:39.920 [INFO][5283] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f585bb-jrjmb" WorkloadEndpoint="ip--172--31--28--68-k8s-calico--apiserver--5d84f585bb--jrjmb-eth0" Sep 10 23:50:39.970006 systemd[1]: Started cri-containerd-2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a.scope - libcontainer 
container 2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a. Sep 10 23:50:40.068565 systemd-networkd[1914]: cali5c2defb7a9d: Link UP Sep 10 23:50:40.080868 systemd-networkd[1914]: cali5c2defb7a9d: Gained carrier Sep 10 23:50:40.115578 containerd[2003]: time="2025-09-10T23:50:40.115420067Z" level=info msg="connecting to shim d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167" address="unix:///run/containerd/s/c14d5da2a8c0e10aa10845fd34e4fd9eb64178e5ab044af50ba67310054f8d35" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:40.184653 systemd-networkd[1914]: cali1fd06c5cf2d: Gained IPv6LL Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.444 [INFO][5295] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0 coredns-668d6bf9bc- kube-system f3070b0e-e694-4250-800c-4411250bc48a 822 0 2025-09-10 23:49:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-68 coredns-668d6bf9bc-v8tt6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5c2defb7a9d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8tt6" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.446 [INFO][5295] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8tt6" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.571 [INFO][5334] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" HandleID="k8s-pod-network.74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Workload="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.572 [INFO][5334] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" HandleID="k8s-pod-network.74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Workload="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003452d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-68", "pod":"coredns-668d6bf9bc-v8tt6", "timestamp":"2025-09-10 23:50:39.571962588 +0000 UTC"}, Hostname:"ip-172-31-28-68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.572 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.794 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.794 [INFO][5334] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-68' Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.826 [INFO][5334] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" host="ip-172-31-28-68" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.894 [INFO][5334] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-68" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.916 [INFO][5334] ipam/ipam.go 511: Trying affinity for 192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.925 [INFO][5334] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.934 [INFO][5334] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.192/26 host="ip-172-31-28-68" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.934 [INFO][5334] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.192/26 handle="k8s-pod-network.74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" host="ip-172-31-28-68" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.948 [INFO][5334] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0 Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:39.983 [INFO][5334] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.192/26 handle="k8s-pod-network.74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" host="ip-172-31-28-68" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:40.027 [INFO][5334] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.200/26] block=192.168.46.192/26 
handle="k8s-pod-network.74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" host="ip-172-31-28-68" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:40.029 [INFO][5334] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.200/26] handle="k8s-pod-network.74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" host="ip-172-31-28-68" Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:40.029 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:50:40.231674 containerd[2003]: 2025-09-10 23:50:40.031 [INFO][5334] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.200/26] IPv6=[] ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" HandleID="k8s-pod-network.74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Workload="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0" Sep 10 23:50:40.233242 containerd[2003]: 2025-09-10 23:50:40.048 [INFO][5295] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8tt6" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3070b0e-e694-4250-800c-4411250bc48a", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 49, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"", Pod:"coredns-668d6bf9bc-v8tt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c2defb7a9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:40.233242 containerd[2003]: 2025-09-10 23:50:40.048 [INFO][5295] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.200/32] ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8tt6" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0" Sep 10 23:50:40.233242 containerd[2003]: 2025-09-10 23:50:40.048 [INFO][5295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c2defb7a9d ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8tt6" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0" Sep 10 23:50:40.233242 containerd[2003]: 2025-09-10 23:50:40.099 [INFO][5295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-v8tt6" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0" Sep 10 23:50:40.233242 containerd[2003]: 2025-09-10 23:50:40.112 [INFO][5295] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8tt6" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3070b0e-e694-4250-800c-4411250bc48a", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 49, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-68", ContainerID:"74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0", Pod:"coredns-668d6bf9bc-v8tt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c2defb7a9d", MAC:"12:e6:26:25:d1:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:50:40.233242 containerd[2003]: 2025-09-10 23:50:40.180 [INFO][5295] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8tt6" WorkloadEndpoint="ip--172--31--28--68-k8s-coredns--668d6bf9bc--v8tt6-eth0" Sep 10 23:50:40.269651 systemd[1]: Started cri-containerd-d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167.scope - libcontainer container d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167. Sep 10 23:50:40.408513 containerd[2003]: time="2025-09-10T23:50:40.407862384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7n5j,Uid:6accb33c-2d9a-40dc-89a3-62a9683dfac4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a\"" Sep 10 23:50:40.414024 containerd[2003]: time="2025-09-10T23:50:40.413946504Z" level=info msg="connecting to shim 74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0" address="unix:///run/containerd/s/022dd7fd233d51f4d8013157d71472863e34e069fa99654e5f885e3d6d866959" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:40.429207 containerd[2003]: time="2025-09-10T23:50:40.429038652Z" level=info msg="CreateContainer within sandbox \"2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:50:40.486370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508336113.mount: Deactivated successfully. 
Sep 10 23:50:40.511768 containerd[2003]: time="2025-09-10T23:50:40.511650433Z" level=info msg="Container d02ae90b796b94cd9e59ab892b546d30e70b49fa197f3bffb66a79e96174ada4: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:40.579586 systemd[1]: Started cri-containerd-74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0.scope - libcontainer container 74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0. Sep 10 23:50:40.587225 containerd[2003]: time="2025-09-10T23:50:40.587103937Z" level=info msg="CreateContainer within sandbox \"2ea869226a5cbb49829b7ceab913451e5b16771c3007c24bec9ede8efadec68a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d02ae90b796b94cd9e59ab892b546d30e70b49fa197f3bffb66a79e96174ada4\"" Sep 10 23:50:40.598039 containerd[2003]: time="2025-09-10T23:50:40.596497789Z" level=info msg="StartContainer for \"d02ae90b796b94cd9e59ab892b546d30e70b49fa197f3bffb66a79e96174ada4\"" Sep 10 23:50:40.602307 containerd[2003]: time="2025-09-10T23:50:40.601916185Z" level=info msg="connecting to shim d02ae90b796b94cd9e59ab892b546d30e70b49fa197f3bffb66a79e96174ada4" address="unix:///run/containerd/s/f71bc632a62deee6e99f84911aab730450ee44d27ef54e2d28bce12a00798109" protocol=ttrpc version=3 Sep 10 23:50:40.721174 containerd[2003]: time="2025-09-10T23:50:40.717230918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f585bb-jrjmb,Uid:95691e3f-1910-43a6-be81-55198cb86931,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167\"" Sep 10 23:50:40.742778 systemd[1]: Started cri-containerd-d02ae90b796b94cd9e59ab892b546d30e70b49fa197f3bffb66a79e96174ada4.scope - libcontainer container d02ae90b796b94cd9e59ab892b546d30e70b49fa197f3bffb66a79e96174ada4. 
Sep 10 23:50:40.792371 containerd[2003]: time="2025-09-10T23:50:40.792301886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v8tt6,Uid:f3070b0e-e694-4250-800c-4411250bc48a,Namespace:kube-system,Attempt:0,} returns sandbox id \"74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0\"" Sep 10 23:50:40.806829 containerd[2003]: time="2025-09-10T23:50:40.806765006Z" level=info msg="CreateContainer within sandbox \"74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:50:40.842338 containerd[2003]: time="2025-09-10T23:50:40.842242634Z" level=info msg="Container 6ed9d6f69822cffc4616ae14030a0b717624035e6972432ddd53fe909935c088: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:40.864687 containerd[2003]: time="2025-09-10T23:50:40.864594458Z" level=info msg="CreateContainer within sandbox \"74cfbc0ee5726a028bc2dba6bc351293ee43518511f3e9147de7d7f6590ea5c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ed9d6f69822cffc4616ae14030a0b717624035e6972432ddd53fe909935c088\"" Sep 10 23:50:40.867790 containerd[2003]: time="2025-09-10T23:50:40.867701198Z" level=info msg="StartContainer for \"6ed9d6f69822cffc4616ae14030a0b717624035e6972432ddd53fe909935c088\"" Sep 10 23:50:40.874644 containerd[2003]: time="2025-09-10T23:50:40.874113506Z" level=info msg="StartContainer for \"d02ae90b796b94cd9e59ab892b546d30e70b49fa197f3bffb66a79e96174ada4\" returns successfully" Sep 10 23:50:40.878664 containerd[2003]: time="2025-09-10T23:50:40.878510390Z" level=info msg="connecting to shim 6ed9d6f69822cffc4616ae14030a0b717624035e6972432ddd53fe909935c088" address="unix:///run/containerd/s/022dd7fd233d51f4d8013157d71472863e34e069fa99654e5f885e3d6d866959" protocol=ttrpc version=3 Sep 10 23:50:40.964892 systemd[1]: Started cri-containerd-6ed9d6f69822cffc4616ae14030a0b717624035e6972432ddd53fe909935c088.scope - libcontainer container 
6ed9d6f69822cffc4616ae14030a0b717624035e6972432ddd53fe909935c088. Sep 10 23:50:41.114505 containerd[2003]: time="2025-09-10T23:50:41.114428112Z" level=info msg="StartContainer for \"6ed9d6f69822cffc4616ae14030a0b717624035e6972432ddd53fe909935c088\" returns successfully" Sep 10 23:50:41.272797 systemd-networkd[1914]: cali0e62bbdb0be: Gained IPv6LL Sep 10 23:50:41.465615 systemd-networkd[1914]: cali7406a2474f1: Gained IPv6LL Sep 10 23:50:41.694635 kubelet[3597]: I0910 23:50:41.694292 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v7n5j" podStartSLOduration=52.694229102 podStartE2EDuration="52.694229102s" podCreationTimestamp="2025-09-10 23:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:50:41.690796118 +0000 UTC m=+57.826306524" watchObservedRunningTime="2025-09-10 23:50:41.694229102 +0000 UTC m=+57.829739592" Sep 10 23:50:41.802756 kubelet[3597]: I0910 23:50:41.802642 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v8tt6" podStartSLOduration=52.802612287 podStartE2EDuration="52.802612287s" podCreationTimestamp="2025-09-10 23:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:50:41.801713943 +0000 UTC m=+57.937224493" watchObservedRunningTime="2025-09-10 23:50:41.802612287 +0000 UTC m=+57.938122657" Sep 10 23:50:41.848840 systemd-networkd[1914]: cali5c2defb7a9d: Gained IPv6LL Sep 10 23:50:41.994955 containerd[2003]: time="2025-09-10T23:50:41.994903372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:42.010728 containerd[2003]: time="2025-09-10T23:50:42.010622652Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 10 23:50:42.013309 containerd[2003]: time="2025-09-10T23:50:42.013227072Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:42.023282 containerd[2003]: time="2025-09-10T23:50:42.023159148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:42.024103 containerd[2003]: time="2025-09-10T23:50:42.024030132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 5.000631637s" Sep 10 23:50:42.024103 containerd[2003]: time="2025-09-10T23:50:42.024090492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 10 23:50:42.028675 containerd[2003]: time="2025-09-10T23:50:42.028614396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 23:50:42.065235 containerd[2003]: time="2025-09-10T23:50:42.065074068Z" level=info msg="CreateContainer within sandbox \"b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 23:50:42.076236 containerd[2003]: time="2025-09-10T23:50:42.076167552Z" level=info msg="Container b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8: CDI devices from CRI 
Config.CDIDevices: []" Sep 10 23:50:42.085785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1219208372.mount: Deactivated successfully. Sep 10 23:50:42.099464 containerd[2003]: time="2025-09-10T23:50:42.099400248Z" level=info msg="CreateContainer within sandbox \"b4f5a97c07a69f48122b3e6677349338f1e8fa10107f2b13b00d0135d221be8d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8\"" Sep 10 23:50:42.101421 containerd[2003]: time="2025-09-10T23:50:42.101342508Z" level=info msg="StartContainer for \"b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8\"" Sep 10 23:50:42.105852 containerd[2003]: time="2025-09-10T23:50:42.105775968Z" level=info msg="connecting to shim b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8" address="unix:///run/containerd/s/1d5b66b461d22370841dd7c161a1ce19fb963133a74ad31ccc3ad5023f3af53f" protocol=ttrpc version=3 Sep 10 23:50:42.155682 systemd[1]: Started cri-containerd-b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8.scope - libcontainer container b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8. 
Sep 10 23:50:42.296749 containerd[2003]: time="2025-09-10T23:50:42.296606161Z" level=info msg="StartContainer for \"b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8\" returns successfully" Sep 10 23:50:42.715164 kubelet[3597]: I0910 23:50:42.715076 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-677c6fffd9-9nnvf" podStartSLOduration=24.71013055 podStartE2EDuration="29.715048791s" podCreationTimestamp="2025-09-10 23:50:13 +0000 UTC" firstStartedPulling="2025-09-10 23:50:37.022010755 +0000 UTC m=+53.157521089" lastFinishedPulling="2025-09-10 23:50:42.026928984 +0000 UTC m=+58.162439330" observedRunningTime="2025-09-10 23:50:42.712603131 +0000 UTC m=+58.848113501" watchObservedRunningTime="2025-09-10 23:50:42.715048791 +0000 UTC m=+58.850559137" Sep 10 23:50:42.767935 containerd[2003]: time="2025-09-10T23:50:42.767830924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8\" id:\"7db45ddf3b1ed84dfef1e58c06748000662b0149006c7b6639f2e969efa04024\" pid:5636 exited_at:{seconds:1757548242 nanos:764236492}" Sep 10 23:50:44.210489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644936541.mount: Deactivated successfully. 
Sep 10 23:50:44.235313 containerd[2003]: time="2025-09-10T23:50:44.234229887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:44.236723 containerd[2003]: time="2025-09-10T23:50:44.236674791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 23:50:44.238669 containerd[2003]: time="2025-09-10T23:50:44.238492587Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:44.247491 containerd[2003]: time="2025-09-10T23:50:44.247290831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:44.251087 containerd[2003]: time="2025-09-10T23:50:44.250990779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.222311283s" Sep 10 23:50:44.251087 containerd[2003]: time="2025-09-10T23:50:44.251077743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 23:50:44.256408 containerd[2003]: time="2025-09-10T23:50:44.256332711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 23:50:44.264944 containerd[2003]: time="2025-09-10T23:50:44.264167535Z" level=info msg="CreateContainer within sandbox 
\"06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 23:50:44.280242 containerd[2003]: time="2025-09-10T23:50:44.279546867Z" level=info msg="Container 30f86fc6b9e80a4a792b0219d910200ff832dc130ee9e645ec84df0027587334: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:44.302068 containerd[2003]: time="2025-09-10T23:50:44.302004783Z" level=info msg="CreateContainer within sandbox \"06fa81acb69d89f14e3511eeaa2c9494513c171fbea1b7a5bbc88f65adffc15e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"30f86fc6b9e80a4a792b0219d910200ff832dc130ee9e645ec84df0027587334\"" Sep 10 23:50:44.304796 containerd[2003]: time="2025-09-10T23:50:44.304732071Z" level=info msg="StartContainer for \"30f86fc6b9e80a4a792b0219d910200ff832dc130ee9e645ec84df0027587334\"" Sep 10 23:50:44.307103 containerd[2003]: time="2025-09-10T23:50:44.307028991Z" level=info msg="connecting to shim 30f86fc6b9e80a4a792b0219d910200ff832dc130ee9e645ec84df0027587334" address="unix:///run/containerd/s/ec7f47b09bb8c7487898b03c726d47d016bc6be05fc6f8de8bc269062a5dac85" protocol=ttrpc version=3 Sep 10 23:50:44.422698 ntpd[1968]: Listen normally on 8 vxlan.calico 192.168.46.192:123 Sep 10 23:50:44.425874 ntpd[1968]: 10 Sep 23:50:44 ntpd[1968]: Listen normally on 8 vxlan.calico 192.168.46.192:123 Sep 10 23:50:44.425874 ntpd[1968]: 10 Sep 23:50:44 ntpd[1968]: Listen normally on 9 cali30463567bdf [fe80::ecee:eeff:feee:eeee%4]:123 Sep 10 23:50:44.425874 ntpd[1968]: 10 Sep 23:50:44 ntpd[1968]: Listen normally on 10 cali75c27cb0947 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 10 23:50:44.425874 ntpd[1968]: 10 Sep 23:50:44 ntpd[1968]: Listen normally on 11 vxlan.calico [fe80::64f2:bff:fe7b:d989%6]:123 Sep 10 23:50:44.425874 ntpd[1968]: 10 Sep 23:50:44 ntpd[1968]: Listen normally on 12 cali1704bd77da6 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 10 23:50:44.425874 ntpd[1968]: 10 Sep 23:50:44 ntpd[1968]: Listen normally 
on 13 cali0ccf8f74456 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 10 23:50:44.425874 ntpd[1968]: 10 Sep 23:50:44 ntpd[1968]: Listen normally on 14 cali1fd06c5cf2d [fe80::ecee:eeff:feee:eeee%11]:123 Sep 10 23:50:44.422833 ntpd[1968]: Listen normally on 9 cali30463567bdf [fe80::ecee:eeff:feee:eeee%4]:123 Sep 10 23:50:44.422913 ntpd[1968]: Listen normally on 10 cali75c27cb0947 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 10 23:50:44.427885 ntpd[1968]: 10 Sep 23:50:44 ntpd[1968]: Listen normally on 15 cali7406a2474f1 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 10 23:50:44.427885 ntpd[1968]: 10 Sep 23:50:44 ntpd[1968]: Listen normally on 16 cali0e62bbdb0be [fe80::ecee:eeff:feee:eeee%13]:123 Sep 10 23:50:44.427885 ntpd[1968]: 10 Sep 23:50:44 ntpd[1968]: Listen normally on 17 cali5c2defb7a9d [fe80::ecee:eeff:feee:eeee%14]:123 Sep 10 23:50:44.422976 ntpd[1968]: Listen normally on 11 vxlan.calico [fe80::64f2:bff:fe7b:d989%6]:123 Sep 10 23:50:44.423040 ntpd[1968]: Listen normally on 12 cali1704bd77da6 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 10 23:50:44.423107 ntpd[1968]: Listen normally on 13 cali0ccf8f74456 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 10 23:50:44.423184 ntpd[1968]: Listen normally on 14 cali1fd06c5cf2d [fe80::ecee:eeff:feee:eeee%11]:123 Sep 10 23:50:44.427193 ntpd[1968]: Listen normally on 15 cali7406a2474f1 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 10 23:50:44.427406 ntpd[1968]: Listen normally on 16 cali0e62bbdb0be [fe80::ecee:eeff:feee:eeee%13]:123 Sep 10 23:50:44.427481 ntpd[1968]: Listen normally on 17 cali5c2defb7a9d [fe80::ecee:eeff:feee:eeee%14]:123 Sep 10 23:50:44.500695 systemd[1]: Started cri-containerd-30f86fc6b9e80a4a792b0219d910200ff832dc130ee9e645ec84df0027587334.scope - libcontainer container 30f86fc6b9e80a4a792b0219d910200ff832dc130ee9e645ec84df0027587334. 
Sep 10 23:50:44.617147 containerd[2003]: time="2025-09-10T23:50:44.617067233Z" level=info msg="StartContainer for \"30f86fc6b9e80a4a792b0219d910200ff832dc130ee9e645ec84df0027587334\" returns successfully" Sep 10 23:50:46.011146 containerd[2003]: time="2025-09-10T23:50:46.010972780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:46.013641 containerd[2003]: time="2025-09-10T23:50:46.013576300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 10 23:50:46.013858 containerd[2003]: time="2025-09-10T23:50:46.013806880Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:46.017518 containerd[2003]: time="2025-09-10T23:50:46.017432704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:46.019447 containerd[2003]: time="2025-09-10T23:50:46.018650092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.761930141s" Sep 10 23:50:46.019447 containerd[2003]: time="2025-09-10T23:50:46.018708604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 10 23:50:46.022309 containerd[2003]: time="2025-09-10T23:50:46.021687748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 
23:50:46.025970 containerd[2003]: time="2025-09-10T23:50:46.025897960Z" level=info msg="CreateContainer within sandbox \"50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 23:50:46.043645 containerd[2003]: time="2025-09-10T23:50:46.042634912Z" level=info msg="Container 854fb2b2027368b89da6a9930566d9d0daf1371186312237cb926c9a7ed9a32a: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:46.149592 containerd[2003]: time="2025-09-10T23:50:46.149501633Z" level=info msg="CreateContainer within sandbox \"50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"854fb2b2027368b89da6a9930566d9d0daf1371186312237cb926c9a7ed9a32a\"" Sep 10 23:50:46.150575 containerd[2003]: time="2025-09-10T23:50:46.150441905Z" level=info msg="StartContainer for \"854fb2b2027368b89da6a9930566d9d0daf1371186312237cb926c9a7ed9a32a\"" Sep 10 23:50:46.154071 containerd[2003]: time="2025-09-10T23:50:46.153998873Z" level=info msg="connecting to shim 854fb2b2027368b89da6a9930566d9d0daf1371186312237cb926c9a7ed9a32a" address="unix:///run/containerd/s/ee98e02840a34516de4cf3eaf3aac0819f54e55f0e9ab76c676869e8bd93b0bb" protocol=ttrpc version=3 Sep 10 23:50:46.199568 systemd[1]: Started cri-containerd-854fb2b2027368b89da6a9930566d9d0daf1371186312237cb926c9a7ed9a32a.scope - libcontainer container 854fb2b2027368b89da6a9930566d9d0daf1371186312237cb926c9a7ed9a32a. Sep 10 23:50:46.303830 containerd[2003]: time="2025-09-10T23:50:46.303782525Z" level=info msg="StartContainer for \"854fb2b2027368b89da6a9930566d9d0daf1371186312237cb926c9a7ed9a32a\" returns successfully" Sep 10 23:50:49.942589 systemd[1]: Started sshd@9-172.31.28.68:22-139.178.68.195:44308.service - OpenSSH per-connection server daemon (139.178.68.195:44308). 
Sep 10 23:50:50.165375 sshd[5747]: Accepted publickey for core from 139.178.68.195 port 44308 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:50.170188 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:50.184156 systemd-logind[1975]: New session 10 of user core. Sep 10 23:50:50.192739 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 23:50:50.577432 sshd[5749]: Connection closed by 139.178.68.195 port 44308 Sep 10 23:50:50.576944 sshd-session[5747]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:50.589898 systemd[1]: sshd@9-172.31.28.68:22-139.178.68.195:44308.service: Deactivated successfully. Sep 10 23:50:50.596175 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 23:50:50.600669 systemd-logind[1975]: Session 10 logged out. Waiting for processes to exit. Sep 10 23:50:50.606633 systemd-logind[1975]: Removed session 10. Sep 10 23:50:51.091092 containerd[2003]: time="2025-09-10T23:50:51.091020597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:51.092866 containerd[2003]: time="2025-09-10T23:50:51.092484849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 10 23:50:51.094085 containerd[2003]: time="2025-09-10T23:50:51.094025937Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:51.097461 containerd[2003]: time="2025-09-10T23:50:51.097395777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:51.099551 containerd[2003]: 
time="2025-09-10T23:50:51.099489765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 5.076960313s" Sep 10 23:50:51.099741 containerd[2003]: time="2025-09-10T23:50:51.099711177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 23:50:51.105242 containerd[2003]: time="2025-09-10T23:50:51.105056901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 23:50:51.112793 containerd[2003]: time="2025-09-10T23:50:51.112384305Z" level=info msg="CreateContainer within sandbox \"1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:50:51.124561 containerd[2003]: time="2025-09-10T23:50:51.124446333Z" level=info msg="Container 31f0fce9a8df736fcba2d1f7fcd34112e2efc403504951cb70c5d6e82d7d159f: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:51.154661 containerd[2003]: time="2025-09-10T23:50:51.154058061Z" level=info msg="CreateContainer within sandbox \"1863f0772bc1f89f7cb3a259bc0ff490c6a7e0218f038c630d4ebd3f4a035a4e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"31f0fce9a8df736fcba2d1f7fcd34112e2efc403504951cb70c5d6e82d7d159f\"" Sep 10 23:50:51.155970 containerd[2003]: time="2025-09-10T23:50:51.155318061Z" level=info msg="StartContainer for \"31f0fce9a8df736fcba2d1f7fcd34112e2efc403504951cb70c5d6e82d7d159f\"" Sep 10 23:50:51.159884 containerd[2003]: time="2025-09-10T23:50:51.159801321Z" level=info msg="connecting to shim 
31f0fce9a8df736fcba2d1f7fcd34112e2efc403504951cb70c5d6e82d7d159f" address="unix:///run/containerd/s/0d7a039f95926ca49364d5d1f4365b276a881a67abaf81c8be0fcd045e7b3c90" protocol=ttrpc version=3 Sep 10 23:50:51.207656 systemd[1]: Started cri-containerd-31f0fce9a8df736fcba2d1f7fcd34112e2efc403504951cb70c5d6e82d7d159f.scope - libcontainer container 31f0fce9a8df736fcba2d1f7fcd34112e2efc403504951cb70c5d6e82d7d159f. Sep 10 23:50:51.304144 containerd[2003]: time="2025-09-10T23:50:51.303993082Z" level=info msg="StartContainer for \"31f0fce9a8df736fcba2d1f7fcd34112e2efc403504951cb70c5d6e82d7d159f\" returns successfully" Sep 10 23:50:51.753081 kubelet[3597]: I0910 23:50:51.752172 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-56d665d69-dr8hz" podStartSLOduration=9.430671266 podStartE2EDuration="18.752147136s" podCreationTimestamp="2025-09-10 23:50:33 +0000 UTC" firstStartedPulling="2025-09-10 23:50:34.933615753 +0000 UTC m=+51.069126111" lastFinishedPulling="2025-09-10 23:50:44.255091647 +0000 UTC m=+60.390601981" observedRunningTime="2025-09-10 23:50:44.722334113 +0000 UTC m=+60.857844591" watchObservedRunningTime="2025-09-10 23:50:51.752147136 +0000 UTC m=+67.887657482" Sep 10 23:50:52.736568 kubelet[3597]: I0910 23:50:52.736492 3597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:50:55.626171 systemd[1]: Started sshd@10-172.31.28.68:22-139.178.68.195:44320.service - OpenSSH per-connection server daemon (139.178.68.195:44320). Sep 10 23:50:55.885594 sshd[5812]: Accepted publickey for core from 139.178.68.195 port 44320 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:55.890650 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:55.912398 systemd-logind[1975]: New session 11 of user core. Sep 10 23:50:55.919848 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 10 23:50:56.278656 sshd[5814]: Connection closed by 139.178.68.195 port 44320 Sep 10 23:50:56.280704 sshd-session[5812]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:56.285191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515308630.mount: Deactivated successfully. Sep 10 23:50:56.294364 systemd[1]: sshd@10-172.31.28.68:22-139.178.68.195:44320.service: Deactivated successfully. Sep 10 23:50:56.300786 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 23:50:56.305515 systemd-logind[1975]: Session 11 logged out. Waiting for processes to exit. Sep 10 23:50:56.308590 systemd-logind[1975]: Removed session 11. Sep 10 23:50:57.084918 containerd[2003]: time="2025-09-10T23:50:57.084838239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:57.086830 containerd[2003]: time="2025-09-10T23:50:57.086574663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 10 23:50:57.088195 containerd[2003]: time="2025-09-10T23:50:57.088133979Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:57.094998 containerd[2003]: time="2025-09-10T23:50:57.094908471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:57.096441 containerd[2003]: time="2025-09-10T23:50:57.096369867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 5.99113677s" Sep 10 23:50:57.096883 containerd[2003]: time="2025-09-10T23:50:57.096441531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 10 23:50:57.099732 containerd[2003]: time="2025-09-10T23:50:57.099670959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 23:50:57.101503 containerd[2003]: time="2025-09-10T23:50:57.101434671Z" level=info msg="CreateContainer within sandbox \"1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 23:50:57.121282 containerd[2003]: time="2025-09-10T23:50:57.118585023Z" level=info msg="Container 987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:57.130707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4261284473.mount: Deactivated successfully. 
Sep 10 23:50:57.141138 containerd[2003]: time="2025-09-10T23:50:57.141040263Z" level=info msg="CreateContainer within sandbox \"1ae9bfab6e78ce13cca032f5a463e769cdd7fd68b097d7219e7023c880cda017\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\"" Sep 10 23:50:57.142309 containerd[2003]: time="2025-09-10T23:50:57.142148127Z" level=info msg="StartContainer for \"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\"" Sep 10 23:50:57.147847 containerd[2003]: time="2025-09-10T23:50:57.147611907Z" level=info msg="connecting to shim 987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10" address="unix:///run/containerd/s/18bdfeb4bf6a79d0c2b3a5eefd16d22035c532f3a8d95082cfb4c455e8cf3b52" protocol=ttrpc version=3 Sep 10 23:50:57.190657 systemd[1]: Started cri-containerd-987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10.scope - libcontainer container 987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10. 
Sep 10 23:50:57.296117 containerd[2003]: time="2025-09-10T23:50:57.295809616Z" level=info msg="StartContainer for \"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\" returns successfully"
Sep 10 23:50:57.491377 containerd[2003]: time="2025-09-10T23:50:57.490751981Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:57.492687 containerd[2003]: time="2025-09-10T23:50:57.492615677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 10 23:50:57.498229 containerd[2003]: time="2025-09-10T23:50:57.498115649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 398.385686ms"
Sep 10 23:50:57.498229 containerd[2003]: time="2025-09-10T23:50:57.498211901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 10 23:50:57.500946 containerd[2003]: time="2025-09-10T23:50:57.500559929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 10 23:50:57.508305 containerd[2003]: time="2025-09-10T23:50:57.506901833Z" level=info msg="CreateContainer within sandbox \"d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 10 23:50:57.522841 containerd[2003]: time="2025-09-10T23:50:57.522745229Z" level=info msg="Container f3d1ea951e6520b881bd9c8c655e7aa73bb2c9f588f5d02f8798bf6065cd99d5: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:50:57.544709 containerd[2003]: time="2025-09-10T23:50:57.544492973Z" level=info msg="CreateContainer within sandbox \"d6da7a96a263c7ca75ea83ad69d6dec91b239392958a664e4cf4e0c016eee167\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f3d1ea951e6520b881bd9c8c655e7aa73bb2c9f588f5d02f8798bf6065cd99d5\""
Sep 10 23:50:57.546167 containerd[2003]: time="2025-09-10T23:50:57.546116717Z" level=info msg="StartContainer for \"f3d1ea951e6520b881bd9c8c655e7aa73bb2c9f588f5d02f8798bf6065cd99d5\""
Sep 10 23:50:57.550171 containerd[2003]: time="2025-09-10T23:50:57.550095425Z" level=info msg="connecting to shim f3d1ea951e6520b881bd9c8c655e7aa73bb2c9f588f5d02f8798bf6065cd99d5" address="unix:///run/containerd/s/c14d5da2a8c0e10aa10845fd34e4fd9eb64178e5ab044af50ba67310054f8d35" protocol=ttrpc version=3
Sep 10 23:50:57.597676 systemd[1]: Started cri-containerd-f3d1ea951e6520b881bd9c8c655e7aa73bb2c9f588f5d02f8798bf6065cd99d5.scope - libcontainer container f3d1ea951e6520b881bd9c8c655e7aa73bb2c9f588f5d02f8798bf6065cd99d5.
Sep 10 23:50:57.700846 containerd[2003]: time="2025-09-10T23:50:57.700233126Z" level=info msg="StartContainer for \"f3d1ea951e6520b881bd9c8c655e7aa73bb2c9f588f5d02f8798bf6065cd99d5\" returns successfully"
Sep 10 23:50:57.804307 kubelet[3597]: I0910 23:50:57.804171 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d84f585bb-8qbrf" podStartSLOduration=43.149983155 podStartE2EDuration="55.80414595s" podCreationTimestamp="2025-09-10 23:50:02 +0000 UTC" firstStartedPulling="2025-09-10 23:50:38.44754271 +0000 UTC m=+54.583053056" lastFinishedPulling="2025-09-10 23:50:51.101705517 +0000 UTC m=+67.237215851" observedRunningTime="2025-09-10 23:50:51.754756248 +0000 UTC m=+67.890266654" watchObservedRunningTime="2025-09-10 23:50:57.80414595 +0000 UTC m=+73.939656296"
Sep 10 23:50:57.805904 kubelet[3597]: I0910 23:50:57.805076 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-r2cst" podStartSLOduration=25.828065401 podStartE2EDuration="43.805053834s" podCreationTimestamp="2025-09-10 23:50:14 +0000 UTC" firstStartedPulling="2025-09-10 23:50:39.121579822 +0000 UTC m=+55.257090192" lastFinishedPulling="2025-09-10 23:50:57.098568291 +0000 UTC m=+73.234078625" observedRunningTime="2025-09-10 23:50:57.804023358 +0000 UTC m=+73.939533836" watchObservedRunningTime="2025-09-10 23:50:57.805053834 +0000 UTC m=+73.940564180"
Sep 10 23:50:57.870675 kubelet[3597]: I0910 23:50:57.870564 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d84f585bb-jrjmb" podStartSLOduration=39.107311176 podStartE2EDuration="55.870535711s" podCreationTimestamp="2025-09-10 23:50:02 +0000 UTC" firstStartedPulling="2025-09-10 23:50:40.736932002 +0000 UTC m=+56.872442360" lastFinishedPulling="2025-09-10 23:50:57.500156477 +0000 UTC m=+73.635666895" observedRunningTime="2025-09-10 23:50:57.868302871 +0000 UTC m=+74.003813337" watchObservedRunningTime="2025-09-10 23:50:57.870535711 +0000 UTC m=+74.006046057"
Sep 10 23:50:58.115407 containerd[2003]: time="2025-09-10T23:50:58.115195612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\" id:\"d82512b6ab35e505a39778b6e6b0da731fa54deb2d7a757292a225cfff506735\" pid:5914 exit_status:1 exited_at:{seconds:1757548258 nanos:113685976}"
Sep 10 23:50:58.793280 kubelet[3597]: I0910 23:50:58.792536 3597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:50:59.191902 containerd[2003]: time="2025-09-10T23:50:59.191341901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\" id:\"198c3fa3177c341a5dfa2febf6bcddd12fe3caa372fe70974d85cd9339efd8cc\" pid:5958 exit_status:1 exited_at:{seconds:1757548259 nanos:188983625}"
Sep 10 23:50:59.301354 containerd[2003]: time="2025-09-10T23:50:59.300128046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:59.301531 containerd[2003]: time="2025-09-10T23:50:59.301361598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 10 23:50:59.302727 containerd[2003]: time="2025-09-10T23:50:59.302666946Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:59.308211 containerd[2003]: time="2025-09-10T23:50:59.308135778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:50:59.312556 containerd[2003]: time="2025-09-10T23:50:59.312479526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.811216457s"
Sep 10 23:50:59.312556 containerd[2003]: time="2025-09-10T23:50:59.312553530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 10 23:50:59.320987 containerd[2003]: time="2025-09-10T23:50:59.320922534Z" level=info msg="CreateContainer within sandbox \"50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 10 23:50:59.337297 containerd[2003]: time="2025-09-10T23:50:59.336802386Z" level=info msg="Container ffd2e195b292032ade7472fc0bc2e1b5df95f361b199436b4e758d90601d12a2: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:50:59.364700 containerd[2003]: time="2025-09-10T23:50:59.364591914Z" level=info msg="CreateContainer within sandbox \"50c290d55754e955193a963eaaf1bbb46553cfa9cae657ce6cfbe734a5d728c8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ffd2e195b292032ade7472fc0bc2e1b5df95f361b199436b4e758d90601d12a2\""
Sep 10 23:50:59.367377 containerd[2003]: time="2025-09-10T23:50:59.367322466Z" level=info msg="StartContainer for \"ffd2e195b292032ade7472fc0bc2e1b5df95f361b199436b4e758d90601d12a2\""
Sep 10 23:50:59.373782 containerd[2003]: time="2025-09-10T23:50:59.373698078Z" level=info msg="connecting to shim ffd2e195b292032ade7472fc0bc2e1b5df95f361b199436b4e758d90601d12a2" address="unix:///run/containerd/s/ee98e02840a34516de4cf3eaf3aac0819f54e55f0e9ab76c676869e8bd93b0bb" protocol=ttrpc version=3
Sep 10 23:50:59.424718 systemd[1]: Started cri-containerd-ffd2e195b292032ade7472fc0bc2e1b5df95f361b199436b4e758d90601d12a2.scope - libcontainer container ffd2e195b292032ade7472fc0bc2e1b5df95f361b199436b4e758d90601d12a2.
Sep 10 23:50:59.559863 containerd[2003]: time="2025-09-10T23:50:59.559804879Z" level=info msg="StartContainer for \"ffd2e195b292032ade7472fc0bc2e1b5df95f361b199436b4e758d90601d12a2\" returns successfully"
Sep 10 23:50:59.735130 containerd[2003]: time="2025-09-10T23:50:59.735052568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8\" id:\"88c85ba7f3ad3eae5b257f63b42e6fcb3927ef102c9cd3cd249132cfd06fbf0c\" pid:6012 exited_at:{seconds:1757548259 nanos:734456696}"
Sep 10 23:50:59.852295 kubelet[3597]: I0910 23:50:59.851654 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wdxn4" podStartSLOduration=25.656323344 podStartE2EDuration="46.851602125s" podCreationTimestamp="2025-09-10 23:50:13 +0000 UTC" firstStartedPulling="2025-09-10 23:50:38.118914177 +0000 UTC m=+54.254424523" lastFinishedPulling="2025-09-10 23:50:59.31419297 +0000 UTC m=+75.449703304" observedRunningTime="2025-09-10 23:50:59.847695165 +0000 UTC m=+75.983205535" watchObservedRunningTime="2025-09-10 23:50:59.851602125 +0000 UTC m=+75.987112495"
Sep 10 23:51:00.009362 containerd[2003]: time="2025-09-10T23:51:00.009281225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\" id:\"92d841edaa1465c937b4d931fa6dc210ec1ab577a264e5ab5d7aba8ec0d963b4\" pid:6033 exit_status:1 exited_at:{seconds:1757548260 nanos:7496429}"
Sep 10 23:51:00.395139 kubelet[3597]: I0910 23:51:00.395080 3597 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 10 23:51:00.395139 kubelet[3597]: I0910 23:51:00.395151 3597 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 10 23:51:01.027154 containerd[2003]: time="2025-09-10T23:51:01.027089190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\" id:\"06fb76b107c1d680692148dbc0fc9ec320cb95f21048d4ed6db7a78e85f1f9b9\" pid:6058 exited_at:{seconds:1757548261 nanos:26650386}"
Sep 10 23:51:01.315755 systemd[1]: Started sshd@11-172.31.28.68:22-139.178.68.195:56964.service - OpenSSH per-connection server daemon (139.178.68.195:56964).
Sep 10 23:51:01.535765 sshd[6068]: Accepted publickey for core from 139.178.68.195 port 56964 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:01.538959 sshd-session[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:01.549023 systemd-logind[1975]: New session 12 of user core.
Sep 10 23:51:01.561520 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 23:51:01.845275 sshd[6071]: Connection closed by 139.178.68.195 port 56964
Sep 10 23:51:01.846296 sshd-session[6068]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:01.854799 systemd[1]: sshd@11-172.31.28.68:22-139.178.68.195:56964.service: Deactivated successfully.
Sep 10 23:51:01.862238 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 23:51:01.865572 systemd-logind[1975]: Session 12 logged out. Waiting for processes to exit.
Sep 10 23:51:01.887883 systemd[1]: Started sshd@12-172.31.28.68:22-139.178.68.195:56974.service - OpenSSH per-connection server daemon (139.178.68.195:56974).
Sep 10 23:51:01.892471 systemd-logind[1975]: Removed session 12.
Sep 10 23:51:02.089159 sshd[6085]: Accepted publickey for core from 139.178.68.195 port 56974 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:02.092599 sshd-session[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:02.101452 systemd-logind[1975]: New session 13 of user core.
Sep 10 23:51:02.105563 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 23:51:02.438016 sshd[6087]: Connection closed by 139.178.68.195 port 56974
Sep 10 23:51:02.437480 sshd-session[6085]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:02.452563 systemd[1]: sshd@12-172.31.28.68:22-139.178.68.195:56974.service: Deactivated successfully.
Sep 10 23:51:02.459678 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 23:51:02.469558 systemd-logind[1975]: Session 13 logged out. Waiting for processes to exit.
Sep 10 23:51:02.499474 systemd[1]: Started sshd@13-172.31.28.68:22-139.178.68.195:56978.service - OpenSSH per-connection server daemon (139.178.68.195:56978).
Sep 10 23:51:02.502387 systemd-logind[1975]: Removed session 13.
Sep 10 23:51:02.710824 sshd[6097]: Accepted publickey for core from 139.178.68.195 port 56978 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:02.713629 sshd-session[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:02.724929 systemd-logind[1975]: New session 14 of user core.
Sep 10 23:51:02.733712 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 23:51:03.021100 sshd[6099]: Connection closed by 139.178.68.195 port 56978
Sep 10 23:51:03.021419 sshd-session[6097]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:03.032698 systemd[1]: sshd@13-172.31.28.68:22-139.178.68.195:56978.service: Deactivated successfully.
Sep 10 23:51:03.038086 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 23:51:03.043895 systemd-logind[1975]: Session 14 logged out. Waiting for processes to exit.
Sep 10 23:51:03.049230 systemd-logind[1975]: Removed session 14.
Sep 10 23:51:04.710190 containerd[2003]: time="2025-09-10T23:51:04.710103361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286\" id:\"85a05b74dab2bffc17155773a857c94ce74374d1c22adb9c6d3fbe53f0af97a6\" pid:6123 exited_at:{seconds:1757548264 nanos:709153093}"
Sep 10 23:51:06.949097 kubelet[3597]: I0910 23:51:06.948515 3597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:51:08.066720 systemd[1]: Started sshd@14-172.31.28.68:22-139.178.68.195:56986.service - OpenSSH per-connection server daemon (139.178.68.195:56986).
Sep 10 23:51:08.299095 sshd[6142]: Accepted publickey for core from 139.178.68.195 port 56986 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:08.304636 sshd-session[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:08.320849 systemd-logind[1975]: New session 15 of user core.
Sep 10 23:51:08.330744 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 23:51:08.644567 sshd[6144]: Connection closed by 139.178.68.195 port 56986
Sep 10 23:51:08.644935 sshd-session[6142]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:08.657147 systemd[1]: sshd@14-172.31.28.68:22-139.178.68.195:56986.service: Deactivated successfully.
Sep 10 23:51:08.667199 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 23:51:08.672364 systemd-logind[1975]: Session 15 logged out. Waiting for processes to exit.
Sep 10 23:51:08.675805 systemd-logind[1975]: Removed session 15.
Sep 10 23:51:12.751745 containerd[2003]: time="2025-09-10T23:51:12.751629693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8\" id:\"8a7cb467a68b956ed692e881fc019e1a4d2f5b057c3aed2a94421d0c2ef0ddc7\" pid:6170 exited_at:{seconds:1757548272 nanos:750800529}"
Sep 10 23:51:13.686168 systemd[1]: Started sshd@15-172.31.28.68:22-139.178.68.195:45232.service - OpenSSH per-connection server daemon (139.178.68.195:45232).
Sep 10 23:51:13.883367 sshd[6181]: Accepted publickey for core from 139.178.68.195 port 45232 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:13.885914 sshd-session[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:13.895103 systemd-logind[1975]: New session 16 of user core.
Sep 10 23:51:13.902590 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 23:51:14.231405 sshd[6183]: Connection closed by 139.178.68.195 port 45232
Sep 10 23:51:14.234478 sshd-session[6181]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:14.245969 systemd[1]: sshd@15-172.31.28.68:22-139.178.68.195:45232.service: Deactivated successfully.
Sep 10 23:51:14.254200 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 23:51:14.263021 systemd-logind[1975]: Session 16 logged out. Waiting for processes to exit.
Sep 10 23:51:14.269574 systemd-logind[1975]: Removed session 16.
Sep 10 23:51:19.287873 systemd[1]: Started sshd@16-172.31.28.68:22-139.178.68.195:45244.service - OpenSSH per-connection server daemon (139.178.68.195:45244).
Sep 10 23:51:19.540174 sshd[6207]: Accepted publickey for core from 139.178.68.195 port 45244 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:19.544876 sshd-session[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:19.561362 systemd-logind[1975]: New session 17 of user core.
Sep 10 23:51:19.571192 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 23:51:19.976151 sshd[6209]: Connection closed by 139.178.68.195 port 45244
Sep 10 23:51:19.979423 sshd-session[6207]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:19.991954 systemd[1]: sshd@16-172.31.28.68:22-139.178.68.195:45244.service: Deactivated successfully.
Sep 10 23:51:20.000071 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 23:51:20.005665 systemd-logind[1975]: Session 17 logged out. Waiting for processes to exit.
Sep 10 23:51:20.011863 systemd-logind[1975]: Removed session 17.
Sep 10 23:51:22.723608 kubelet[3597]: I0910 23:51:22.722842 3597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:51:25.028950 systemd[1]: Started sshd@17-172.31.28.68:22-139.178.68.195:49108.service - OpenSSH per-connection server daemon (139.178.68.195:49108).
Sep 10 23:51:25.254143 sshd[6229]: Accepted publickey for core from 139.178.68.195 port 49108 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:25.258376 sshd-session[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:25.267547 systemd-logind[1975]: New session 18 of user core.
Sep 10 23:51:25.273623 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 23:51:25.617596 sshd[6231]: Connection closed by 139.178.68.195 port 49108
Sep 10 23:51:25.618912 sshd-session[6229]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:25.632107 systemd[1]: sshd@17-172.31.28.68:22-139.178.68.195:49108.service: Deactivated successfully.
Sep 10 23:51:25.639142 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 23:51:25.643497 systemd-logind[1975]: Session 18 logged out. Waiting for processes to exit.
Sep 10 23:51:25.669534 systemd[1]: Started sshd@18-172.31.28.68:22-139.178.68.195:49112.service - OpenSSH per-connection server daemon (139.178.68.195:49112).
Sep 10 23:51:25.673474 systemd-logind[1975]: Removed session 18.
Sep 10 23:51:25.890653 sshd[6243]: Accepted publickey for core from 139.178.68.195 port 49112 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:25.894949 sshd-session[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:25.905829 systemd-logind[1975]: New session 19 of user core.
Sep 10 23:51:25.914587 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 23:51:26.738636 sshd[6245]: Connection closed by 139.178.68.195 port 49112
Sep 10 23:51:26.740606 sshd-session[6243]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:26.754822 systemd[1]: sshd@18-172.31.28.68:22-139.178.68.195:49112.service: Deactivated successfully.
Sep 10 23:51:26.764279 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 23:51:26.767609 systemd-logind[1975]: Session 19 logged out. Waiting for processes to exit.
Sep 10 23:51:26.790848 systemd[1]: Started sshd@19-172.31.28.68:22-139.178.68.195:49120.service - OpenSSH per-connection server daemon (139.178.68.195:49120).
Sep 10 23:51:26.793542 systemd-logind[1975]: Removed session 19.
Sep 10 23:51:27.020784 sshd[6255]: Accepted publickey for core from 139.178.68.195 port 49120 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:27.024435 sshd-session[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:27.035751 systemd-logind[1975]: New session 20 of user core.
Sep 10 23:51:27.043769 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 23:51:28.524417 sshd[6257]: Connection closed by 139.178.68.195 port 49120
Sep 10 23:51:28.528403 sshd-session[6255]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:28.539511 systemd-logind[1975]: Session 20 logged out. Waiting for processes to exit.
Sep 10 23:51:28.541815 systemd[1]: sshd@19-172.31.28.68:22-139.178.68.195:49120.service: Deactivated successfully.
Sep 10 23:51:28.554891 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 23:51:28.585518 systemd-logind[1975]: Removed session 20.
Sep 10 23:51:28.590729 systemd[1]: Started sshd@20-172.31.28.68:22-139.178.68.195:49122.service - OpenSSH per-connection server daemon (139.178.68.195:49122).
Sep 10 23:51:28.802481 sshd[6272]: Accepted publickey for core from 139.178.68.195 port 49122 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:28.805613 sshd-session[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:28.821362 systemd-logind[1975]: New session 21 of user core.
Sep 10 23:51:28.825539 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 10 23:51:29.549430 sshd[6276]: Connection closed by 139.178.68.195 port 49122
Sep 10 23:51:29.550079 sshd-session[6272]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:29.564722 systemd[1]: sshd@20-172.31.28.68:22-139.178.68.195:49122.service: Deactivated successfully.
Sep 10 23:51:29.564896 systemd-logind[1975]: Session 21 logged out. Waiting for processes to exit.
Sep 10 23:51:29.576001 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 23:51:29.599689 systemd[1]: Started sshd@21-172.31.28.68:22-139.178.68.195:49126.service - OpenSSH per-connection server daemon (139.178.68.195:49126).
Sep 10 23:51:29.606071 systemd-logind[1975]: Removed session 21.
Sep 10 23:51:29.827562 sshd[6286]: Accepted publickey for core from 139.178.68.195 port 49126 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:29.834455 sshd-session[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:29.855386 systemd-logind[1975]: New session 22 of user core.
Sep 10 23:51:29.863594 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 10 23:51:30.049958 containerd[2003]: time="2025-09-10T23:51:30.049882187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\" id:\"38005680f3c7b34c6631174970bb3f0457716a14203814064da47cc1e5c5f766\" pid:6302 exited_at:{seconds:1757548290 nanos:49381139}"
Sep 10 23:51:30.281503 sshd[6293]: Connection closed by 139.178.68.195 port 49126
Sep 10 23:51:30.282347 sshd-session[6286]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:30.292684 systemd[1]: sshd@21-172.31.28.68:22-139.178.68.195:49126.service: Deactivated successfully.
Sep 10 23:51:30.300798 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 23:51:30.306428 systemd-logind[1975]: Session 22 logged out. Waiting for processes to exit.
Sep 10 23:51:30.310348 systemd-logind[1975]: Removed session 22.
Sep 10 23:51:34.871592 containerd[2003]: time="2025-09-10T23:51:34.871213027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286\" id:\"d5d24304a5e5502db43f44bd1aea8ac43cf477becb1dd2a38114fea1da549b45\" pid:6340 exited_at:{seconds:1757548294 nanos:869781787}"
Sep 10 23:51:35.323465 systemd[1]: Started sshd@22-172.31.28.68:22-139.178.68.195:52400.service - OpenSSH per-connection server daemon (139.178.68.195:52400).
Sep 10 23:51:35.578819 sshd[6352]: Accepted publickey for core from 139.178.68.195 port 52400 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:35.583720 sshd-session[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:35.602381 systemd-logind[1975]: New session 23 of user core.
Sep 10 23:51:35.613604 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 10 23:51:35.906235 sshd[6354]: Connection closed by 139.178.68.195 port 52400
Sep 10 23:51:35.907575 sshd-session[6352]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:35.918373 systemd[1]: sshd@22-172.31.28.68:22-139.178.68.195:52400.service: Deactivated successfully.
Sep 10 23:51:35.927796 systemd[1]: session-23.scope: Deactivated successfully.
Sep 10 23:51:35.933345 systemd-logind[1975]: Session 23 logged out. Waiting for processes to exit.
Sep 10 23:51:35.936007 systemd-logind[1975]: Removed session 23.
Sep 10 23:51:40.950686 systemd[1]: Started sshd@23-172.31.28.68:22-139.178.68.195:39814.service - OpenSSH per-connection server daemon (139.178.68.195:39814).
Sep 10 23:51:41.200586 sshd[6367]: Accepted publickey for core from 139.178.68.195 port 39814 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:41.204467 sshd-session[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:41.219610 systemd-logind[1975]: New session 24 of user core.
Sep 10 23:51:41.224989 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 10 23:51:41.529668 sshd[6369]: Connection closed by 139.178.68.195 port 39814
Sep 10 23:51:41.530899 sshd-session[6367]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:41.539229 systemd[1]: sshd@23-172.31.28.68:22-139.178.68.195:39814.service: Deactivated successfully.
Sep 10 23:51:41.547209 systemd[1]: session-24.scope: Deactivated successfully.
Sep 10 23:51:41.551509 systemd-logind[1975]: Session 24 logged out. Waiting for processes to exit.
Sep 10 23:51:41.557150 systemd-logind[1975]: Removed session 24.
Sep 10 23:51:42.771356 containerd[2003]: time="2025-09-10T23:51:42.771281738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8\" id:\"d40f5c05770c3d5cbd82b6e3e0c1bb82678eb4bcf2fc694b6481ba1634d070bc\" pid:6392 exited_at:{seconds:1757548302 nanos:770591858}"
Sep 10 23:51:46.573817 systemd[1]: Started sshd@24-172.31.28.68:22-139.178.68.195:39828.service - OpenSSH per-connection server daemon (139.178.68.195:39828).
Sep 10 23:51:46.793312 sshd[6403]: Accepted publickey for core from 139.178.68.195 port 39828 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:46.796791 sshd-session[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:46.807057 systemd-logind[1975]: New session 25 of user core.
Sep 10 23:51:46.813622 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 10 23:51:47.150279 sshd[6405]: Connection closed by 139.178.68.195 port 39828
Sep 10 23:51:47.149328 sshd-session[6403]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:47.160711 systemd[1]: sshd@24-172.31.28.68:22-139.178.68.195:39828.service: Deactivated successfully.
Sep 10 23:51:47.167541 systemd[1]: session-25.scope: Deactivated successfully.
Sep 10 23:51:47.176799 systemd-logind[1975]: Session 25 logged out. Waiting for processes to exit.
Sep 10 23:51:47.181677 systemd-logind[1975]: Removed session 25.
Sep 10 23:51:52.187025 systemd[1]: Started sshd@25-172.31.28.68:22-139.178.68.195:49988.service - OpenSSH per-connection server daemon (139.178.68.195:49988).
Sep 10 23:51:52.402306 sshd[6419]: Accepted publickey for core from 139.178.68.195 port 49988 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:52.404956 sshd-session[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:52.414505 systemd-logind[1975]: New session 26 of user core.
Sep 10 23:51:52.425560 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 10 23:51:52.726234 sshd[6421]: Connection closed by 139.178.68.195 port 49988
Sep 10 23:51:52.726729 sshd-session[6419]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:52.737628 systemd[1]: sshd@25-172.31.28.68:22-139.178.68.195:49988.service: Deactivated successfully.
Sep 10 23:51:52.747767 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 23:51:52.754314 systemd-logind[1975]: Session 26 logged out. Waiting for processes to exit.
Sep 10 23:51:52.760374 systemd-logind[1975]: Removed session 26.
Sep 10 23:51:57.768711 systemd[1]: Started sshd@26-172.31.28.68:22-139.178.68.195:49994.service - OpenSSH per-connection server daemon (139.178.68.195:49994).
Sep 10 23:51:57.982599 sshd[6434]: Accepted publickey for core from 139.178.68.195 port 49994 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:51:57.985491 sshd-session[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:51:57.997512 systemd-logind[1975]: New session 27 of user core.
Sep 10 23:51:58.003823 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 10 23:51:58.301298 sshd[6436]: Connection closed by 139.178.68.195 port 49994
Sep 10 23:51:58.301280 sshd-session[6434]: pam_unix(sshd:session): session closed for user core
Sep 10 23:51:58.310691 systemd[1]: sshd@26-172.31.28.68:22-139.178.68.195:49994.service: Deactivated successfully.
Sep 10 23:51:58.316651 systemd[1]: session-27.scope: Deactivated successfully.
Sep 10 23:51:58.326068 systemd-logind[1975]: Session 27 logged out. Waiting for processes to exit.
Sep 10 23:51:58.330084 systemd-logind[1975]: Removed session 27.
Sep 10 23:51:59.767548 containerd[2003]: time="2025-09-10T23:51:59.767488098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8\" id:\"8aceb8d7c62cbfb5a823e62f6d57758f33f60a3d994bf872f387c08b3c09d2f5\" pid:6466 exited_at:{seconds:1757548319 nanos:766745034}"
Sep 10 23:51:59.986955 containerd[2003]: time="2025-09-10T23:51:59.986615671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\" id:\"982d8cff4c1f033eb734ee1fa54cbfadce920554625072d14b02fd32aa82de1b\" pid:6488 exited_at:{seconds:1757548319 nanos:986066059}"
Sep 10 23:52:01.052109 containerd[2003]: time="2025-09-10T23:52:01.052039325Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\" id:\"4159c659c772aeda559628ed942a74d4732d11487a25a9e9c43dde301138bded\" pid:6514 exited_at:{seconds:1757548321 nanos:50925461}"
Sep 10 23:52:03.348232 systemd[1]: Started sshd@27-172.31.28.68:22-139.178.68.195:53168.service - OpenSSH per-connection server daemon (139.178.68.195:53168).
Sep 10 23:52:03.565355 sshd[6524]: Accepted publickey for core from 139.178.68.195 port 53168 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s
Sep 10 23:52:03.569123 sshd-session[6524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:52:03.582353 systemd-logind[1975]: New session 28 of user core.
Sep 10 23:52:03.586840 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 10 23:52:03.900076 sshd[6526]: Connection closed by 139.178.68.195 port 53168
Sep 10 23:52:03.900609 sshd-session[6524]: pam_unix(sshd:session): session closed for user core
Sep 10 23:52:03.912298 systemd[1]: sshd@27-172.31.28.68:22-139.178.68.195:53168.service: Deactivated successfully.
Sep 10 23:52:03.920742 systemd[1]: session-28.scope: Deactivated successfully.
Sep 10 23:52:03.923524 systemd-logind[1975]: Session 28 logged out. Waiting for processes to exit.
Sep 10 23:52:03.928160 systemd-logind[1975]: Removed session 28.
Sep 10 23:52:04.723467 containerd[2003]: time="2025-09-10T23:52:04.723347375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286\" id:\"b57ac4fa15735e8eaf9cf6e47bf38690d24a2521e1f2d9e2d0446594a1f76e68\" pid:6549 exited_at:{seconds:1757548324 nanos:721161551}"
Sep 10 23:52:12.746071 containerd[2003]: time="2025-09-10T23:52:12.746011147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9aaf0d3252edcd0bb1d7abf8eb477b82f4b4c077321ac9611889216f16774f8\" id:\"01adbdd66a98cad921c2fe426740e71a514e65bd3cde68baa3ad856c574b70fc\" pid:6582 exited_at:{seconds:1757548332 nanos:745507747}"
Sep 10 23:52:17.521459 systemd[1]: cri-containerd-cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2.scope: Deactivated successfully.
Sep 10 23:52:17.522064 systemd[1]: cri-containerd-cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2.scope: Consumed 29.356s CPU time, 106.6M memory peak, 192K read from disk.
Sep 10 23:52:17.530900 containerd[2003]: time="2025-09-10T23:52:17.530791090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2\" id:\"cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2\" pid:3912 exit_status:1 exited_at:{seconds:1757548337 nanos:529975750}"
Sep 10 23:52:17.532197 containerd[2003]: time="2025-09-10T23:52:17.530988310Z" level=info msg="received exit event container_id:\"cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2\" id:\"cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2\" pid:3912 exit_status:1 exited_at:{seconds:1757548337 nanos:529975750}"
Sep 10 23:52:17.578637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2-rootfs.mount: Deactivated successfully.
Sep 10 23:52:17.591619 kubelet[3597]: E0910 23:52:17.590916 3597 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-68?timeout=10s\": context deadline exceeded"
Sep 10 23:52:18.167824 kubelet[3597]: I0910 23:52:18.167684 3597 scope.go:117] "RemoveContainer" containerID="cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2"
Sep 10 23:52:18.185512 containerd[2003]: time="2025-09-10T23:52:18.185436574Z" level=info msg="CreateContainer within sandbox \"ea84c91bb50f5847f52238848e43b7fe2bdbe10c07e0f763d6a9c0afd6b8692a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 10 23:52:18.205325 containerd[2003]: time="2025-09-10T23:52:18.202993522Z" level=info msg="Container 5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:52:18.218212 containerd[2003]: time="2025-09-10T23:52:18.218052166Z" level=info msg="CreateContainer within sandbox \"ea84c91bb50f5847f52238848e43b7fe2bdbe10c07e0f763d6a9c0afd6b8692a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc\""
Sep 10 23:52:18.218969 containerd[2003]: time="2025-09-10T23:52:18.218902150Z" level=info msg="StartContainer for \"5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc\""
Sep 10 23:52:18.220683 containerd[2003]: time="2025-09-10T23:52:18.220611490Z" level=info msg="connecting to shim 5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc" address="unix:///run/containerd/s/f3b6752b068aea0fb63def2a131a1c0da4170c250323cfbce22a4c49a9736986" protocol=ttrpc version=3
Sep 10 23:52:18.272579 systemd[1]: Started cri-containerd-5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc.scope - libcontainer container 5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc.
Sep 10 23:52:18.340065 containerd[2003]: time="2025-09-10T23:52:18.338641006Z" level=info msg="StartContainer for \"5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc\" returns successfully"
Sep 10 23:52:18.388149 systemd[1]: cri-containerd-5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3.scope: Deactivated successfully.
Sep 10 23:52:18.388708 systemd[1]: cri-containerd-5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3.scope: Consumed 6.430s CPU time, 59.9M memory peak, 128K read from disk.
Sep 10 23:52:18.396110 containerd[2003]: time="2025-09-10T23:52:18.395492951Z" level=info msg="received exit event container_id:\"5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3\" id:\"5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3\" pid:3434 exit_status:1 exited_at:{seconds:1757548338 nanos:394609331}"
Sep 10 23:52:18.397624 containerd[2003]: time="2025-09-10T23:52:18.396346055Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3\" id:\"5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3\" pid:3434 exit_status:1 exited_at:{seconds:1757548338 nanos:394609331}"
Sep 10 23:52:18.472618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3-rootfs.mount: Deactivated successfully.
Sep 10 23:52:19.189189 kubelet[3597]: I0910 23:52:19.189141 3597 scope.go:117] "RemoveContainer" containerID="5558d4d1e4506773fcf28f400c34269bea299089628f366fda3c055b2e8196a3"
Sep 10 23:52:19.193920 containerd[2003]: time="2025-09-10T23:52:19.193839515Z" level=info msg="CreateContainer within sandbox \"080dcdebefa61286a396facef1badcf593357a0138a256c71fac075e6b31ceee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 10 23:52:19.212707 containerd[2003]: time="2025-09-10T23:52:19.212642375Z" level=info msg="Container 7416121a0f2999f24e0b4e3ef9c5219ae0c71fc68aa2b963d1c91284457bbfc9: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:52:19.232108 containerd[2003]: time="2025-09-10T23:52:19.232027523Z" level=info msg="CreateContainer within sandbox \"080dcdebefa61286a396facef1badcf593357a0138a256c71fac075e6b31ceee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7416121a0f2999f24e0b4e3ef9c5219ae0c71fc68aa2b963d1c91284457bbfc9\""
Sep 10 23:52:19.233304 containerd[2003]: time="2025-09-10T23:52:19.233154059Z" level=info msg="StartContainer for \"7416121a0f2999f24e0b4e3ef9c5219ae0c71fc68aa2b963d1c91284457bbfc9\""
Sep 10 23:52:19.235476 containerd[2003]: time="2025-09-10T23:52:19.235425071Z" level=info msg="connecting to shim 7416121a0f2999f24e0b4e3ef9c5219ae0c71fc68aa2b963d1c91284457bbfc9" address="unix:///run/containerd/s/df8295a82863a08f56a141eac0823bbe8c482caf2ce487cc5a128582f257c504" protocol=ttrpc version=3
Sep 10 23:52:19.291584 systemd[1]: Started cri-containerd-7416121a0f2999f24e0b4e3ef9c5219ae0c71fc68aa2b963d1c91284457bbfc9.scope - libcontainer container 7416121a0f2999f24e0b4e3ef9c5219ae0c71fc68aa2b963d1c91284457bbfc9.
Sep 10 23:52:19.381995 containerd[2003]: time="2025-09-10T23:52:19.381895128Z" level=info msg="StartContainer for \"7416121a0f2999f24e0b4e3ef9c5219ae0c71fc68aa2b963d1c91284457bbfc9\" returns successfully"
Sep 10 23:52:24.012433 systemd[1]: cri-containerd-24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17.scope: Deactivated successfully.
Sep 10 23:52:24.016080 systemd[1]: cri-containerd-24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17.scope: Consumed 4.600s CPU time, 20.9M memory peak, 64K read from disk.
Sep 10 23:52:24.021147 containerd[2003]: time="2025-09-10T23:52:24.020910219Z" level=info msg="received exit event container_id:\"24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17\" id:\"24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17\" pid:3444 exit_status:1 exited_at:{seconds:1757548344 nanos:20390859}"
Sep 10 23:52:24.023681 containerd[2003]: time="2025-09-10T23:52:24.023595747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17\" id:\"24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17\" pid:3444 exit_status:1 exited_at:{seconds:1757548344 nanos:20390859}"
Sep 10 23:52:24.075534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17-rootfs.mount: Deactivated successfully.
Sep 10 23:52:24.216810 kubelet[3597]: I0910 23:52:24.216778 3597 scope.go:117] "RemoveContainer" containerID="24f710d3f78eb8651aa8c23173d4341fc1885de3164d9e49e66dbf214fa71f17"
Sep 10 23:52:24.220875 containerd[2003]: time="2025-09-10T23:52:24.220819084Z" level=info msg="CreateContainer within sandbox \"c5750422c6a4bbd1434be48ecb07166b214be4152f836db3ee868c50fae074dc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 10 23:52:24.244026 containerd[2003]: time="2025-09-10T23:52:24.240892684Z" level=info msg="Container 2252d813725001ea9c0d039891fd6f89d7006b16ad1deaf3c8730ba8c9dcb048: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:52:24.253617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712307126.mount: Deactivated successfully.
Sep 10 23:52:24.264360 containerd[2003]: time="2025-09-10T23:52:24.264125248Z" level=info msg="CreateContainer within sandbox \"c5750422c6a4bbd1434be48ecb07166b214be4152f836db3ee868c50fae074dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2252d813725001ea9c0d039891fd6f89d7006b16ad1deaf3c8730ba8c9dcb048\""
Sep 10 23:52:24.266015 containerd[2003]: time="2025-09-10T23:52:24.265855420Z" level=info msg="StartContainer for \"2252d813725001ea9c0d039891fd6f89d7006b16ad1deaf3c8730ba8c9dcb048\""
Sep 10 23:52:24.269098 containerd[2003]: time="2025-09-10T23:52:24.268965232Z" level=info msg="connecting to shim 2252d813725001ea9c0d039891fd6f89d7006b16ad1deaf3c8730ba8c9dcb048" address="unix:///run/containerd/s/77e389dce019b8358e7b7cec0fac7447f4f74499b2da34ebb15946af291bf560" protocol=ttrpc version=3
Sep 10 23:52:24.311602 systemd[1]: Started cri-containerd-2252d813725001ea9c0d039891fd6f89d7006b16ad1deaf3c8730ba8c9dcb048.scope - libcontainer container 2252d813725001ea9c0d039891fd6f89d7006b16ad1deaf3c8730ba8c9dcb048.
Sep 10 23:52:24.402901 containerd[2003]: time="2025-09-10T23:52:24.402423305Z" level=info msg="StartContainer for \"2252d813725001ea9c0d039891fd6f89d7006b16ad1deaf3c8730ba8c9dcb048\" returns successfully"
Sep 10 23:52:27.592283 kubelet[3597]: E0910 23:52:27.592079 3597 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-68?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 10 23:52:29.823345 systemd[1]: cri-containerd-5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc.scope: Deactivated successfully.
Sep 10 23:52:29.824849 containerd[2003]: time="2025-09-10T23:52:29.824723123Z" level=info msg="received exit event container_id:\"5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc\" id:\"5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc\" pid:6629 exit_status:1 exited_at:{seconds:1757548349 nanos:823750679}"
Sep 10 23:52:29.827478 containerd[2003]: time="2025-09-10T23:52:29.827420436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc\" id:\"5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc\" pid:6629 exit_status:1 exited_at:{seconds:1757548349 nanos:823750679}"
Sep 10 23:52:29.882512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc-rootfs.mount: Deactivated successfully.
Sep 10 23:52:29.967399 containerd[2003]: time="2025-09-10T23:52:29.966860172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987f48241e0d9608f2679cb221363d03c840944ed57831a97a34a2a9ca18dc10\" id:\"586b2d9a1e6a4c037444cd9774a2d88029fcd043c5c48d1741932f1ce1f09dce\" pid:6758 exited_at:{seconds:1757548349 nanos:964646568}"
Sep 10 23:52:30.248113 kubelet[3597]: I0910 23:52:30.247866 3597 scope.go:117] "RemoveContainer" containerID="cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2"
Sep 10 23:52:30.250293 kubelet[3597]: I0910 23:52:30.249043 3597 scope.go:117] "RemoveContainer" containerID="5f29c414d692a39ac5ab491c031ef19bfd10ead86f0ec5d16d7b242c9111f7dc"
Sep 10 23:52:30.250293 kubelet[3597]: E0910 23:52:30.250182 3597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-4gc74_tigera-operator(0924b086-6370-4062-b4f3-4db68f99de1f)\"" pod="tigera-operator/tigera-operator-755d956888-4gc74" podUID="0924b086-6370-4062-b4f3-4db68f99de1f"
Sep 10 23:52:30.252832 containerd[2003]: time="2025-09-10T23:52:30.252771382Z" level=info msg="RemoveContainer for \"cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2\""
Sep 10 23:52:30.264653 containerd[2003]: time="2025-09-10T23:52:30.264412198Z" level=info msg="RemoveContainer for \"cb0eae00ca07c315076aa34d3ba568f727e2ad5ffcf0e4e3d50e37b55c928dc2\" returns successfully"
Sep 10 23:52:34.645619 containerd[2003]: time="2025-09-10T23:52:34.645337539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3bfc85ac6081ef70416d3522905ad12d4c8c0ccc63656fa95ba62e90a7d4286\" id:\"6860073801dad3e34bd7457f63c1891eb8794bb5a0df1ab3b8693588c1a69174\" pid:6787 exited_at:{seconds:1757548354 nanos:644848083}"
Sep 10 23:52:37.594049 kubelet[3597]: E0910 23:52:37.593218 3597 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-68?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"