Jul 15 23:13:00.171116 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 15 23:13:00.171168 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025
Jul 15 23:13:00.171194 kernel: KASLR disabled due to lack of seed
Jul 15 23:13:00.171212 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:13:00.171229 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Jul 15 23:13:00.171283 kernel: secureboot: Secure boot disabled
Jul 15 23:13:00.171301 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:13:00.171317 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 15 23:13:00.171333 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 15 23:13:00.171349 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 15 23:13:00.171364 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 15 23:13:00.171388 kernel: ACPI: FACS 0x0000000078630000 000040
Jul 15 23:13:00.171404 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 15 23:13:00.171420 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 15 23:13:00.171438 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 15 23:13:00.171455 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 15 23:13:00.171477 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 15 23:13:00.171494 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 15 23:13:00.171511 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 15 23:13:00.171526 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 15 23:13:00.171543 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 15 23:13:00.171559 kernel: printk: legacy bootconsole [uart0] enabled
Jul 15 23:13:00.171575 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 23:13:00.171592 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 15 23:13:00.171609 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Jul 15 23:13:00.171625 kernel: Zone ranges:
Jul 15 23:13:00.171642 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 15 23:13:00.171664 kernel: DMA32 empty
Jul 15 23:13:00.171681 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 15 23:13:00.171697 kernel: Device empty
Jul 15 23:13:00.171713 kernel: Movable zone start for each node
Jul 15 23:13:00.171730 kernel: Early memory node ranges
Jul 15 23:13:00.171747 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 15 23:13:00.171763 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 15 23:13:00.171780 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 15 23:13:00.171796 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 15 23:13:00.171812 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 15 23:13:00.171829 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 15 23:13:00.171846 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 15 23:13:00.171868 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 15 23:13:00.171893 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 15 23:13:00.171911 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 15 23:13:00.171929 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jul 15 23:13:00.171946 kernel: psci: probing for conduit method from ACPI.
Jul 15 23:13:00.171969 kernel: psci: PSCIv1.0 detected in firmware.
Jul 15 23:13:00.171986 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 23:13:00.172003 kernel: psci: Trusted OS migration not required
Jul 15 23:13:00.172019 kernel: psci: SMC Calling Convention v1.1
Jul 15 23:13:00.172038 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 15 23:13:00.172055 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 23:13:00.172072 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 23:13:00.172091 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 15 23:13:00.172108 kernel: Detected PIPT I-cache on CPU0
Jul 15 23:13:00.172125 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 23:13:00.172143 kernel: CPU features: detected: Spectre-v2
Jul 15 23:13:00.172165 kernel: CPU features: detected: Spectre-v3a
Jul 15 23:13:00.172213 kernel: CPU features: detected: Spectre-BHB
Jul 15 23:13:00.172263 kernel: CPU features: detected: ARM erratum 1742098
Jul 15 23:13:00.172286 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 15 23:13:00.172305 kernel: alternatives: applying boot alternatives
Jul 15 23:13:00.172324 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:13:00.172342 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:13:00.172360 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:13:00.172377 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:13:00.172394 kernel: Fallback order for Node 0: 0
Jul 15 23:13:00.172420 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jul 15 23:13:00.172438 kernel: Policy zone: Normal
Jul 15 23:13:00.172455 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:13:00.172473 kernel: software IO TLB: area num 2.
Jul 15 23:13:00.172490 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Jul 15 23:13:00.172508 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 23:13:00.172525 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:13:00.172543 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:13:00.172561 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 23:13:00.172578 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:13:00.172595 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:13:00.172612 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:13:00.172633 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 23:13:00.172650 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:13:00.172668 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:13:00.172685 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 23:13:00.172703 kernel: GICv3: 96 SPIs implemented
Jul 15 23:13:00.172720 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 23:13:00.172737 kernel: Root IRQ handler: gic_handle_irq
Jul 15 23:13:00.172754 kernel: GICv3: GICv3 features: 16 PPIs
Jul 15 23:13:00.172771 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 15 23:13:00.172787 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 15 23:13:00.172804 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 15 23:13:00.172821 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 23:13:00.172843 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jul 15 23:13:00.172860 kernel: GICv3: using LPI property table @0x0000000400110000
Jul 15 23:13:00.172876 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 15 23:13:00.172893 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jul 15 23:13:00.172910 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:13:00.172928 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 15 23:13:00.172946 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 15 23:13:00.172964 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 15 23:13:00.172983 kernel: Console: colour dummy device 80x25
Jul 15 23:13:00.173001 kernel: printk: legacy console [tty1] enabled
Jul 15 23:13:00.173024 kernel: ACPI: Core revision 20240827
Jul 15 23:13:00.173043 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 15 23:13:00.173061 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:13:00.173079 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:13:00.173096 kernel: landlock: Up and running.
Jul 15 23:13:00.173113 kernel: SELinux: Initializing.
Jul 15 23:13:00.173131 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:13:00.173149 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:13:00.173168 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:13:00.173190 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:13:00.173209 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:13:00.173226 kernel: Remapping and enabling EFI services.
Jul 15 23:13:00.173280 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:13:00.173300 kernel: Detected PIPT I-cache on CPU1
Jul 15 23:13:00.173319 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 15 23:13:00.173337 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jul 15 23:13:00.173356 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 15 23:13:00.173374 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 23:13:00.173403 kernel: SMP: Total of 2 processors activated.
Jul 15 23:13:00.173436 kernel: CPU: All CPU(s) started at EL1
Jul 15 23:13:00.173456 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 23:13:00.173481 kernel: CPU features: detected: 32-bit EL1 Support
Jul 15 23:13:00.173501 kernel: CPU features: detected: CRC32 instructions
Jul 15 23:13:00.173518 kernel: alternatives: applying system-wide alternatives
Jul 15 23:13:00.173537 kernel: Memory: 3796516K/4030464K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 212600K reserved, 16384K cma-reserved)
Jul 15 23:13:00.173556 kernel: devtmpfs: initialized
Jul 15 23:13:00.173581 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:13:00.173600 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 23:13:00.173618 kernel: 16912 pages in range for non-PLT usage
Jul 15 23:13:00.173637 kernel: 508432 pages in range for PLT usage
Jul 15 23:13:00.173657 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:13:00.173676 kernel: SMBIOS 3.0.0 present.
Jul 15 23:13:00.173694 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 15 23:13:00.173712 kernel: DMI: Memory slots populated: 0/0
Jul 15 23:13:00.173731 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:13:00.173756 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 23:13:00.173776 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 23:13:00.173794 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 23:13:00.173813 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:13:00.173832 kernel: audit: type=2000 audit(0.234:1): state=initialized audit_enabled=0 res=1
Jul 15 23:13:00.173850 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:13:00.173869 kernel: cpuidle: using governor menu
Jul 15 23:13:00.173887 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 23:13:00.173906 kernel: ASID allocator initialised with 65536 entries
Jul 15 23:13:00.173931 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:13:00.173949 kernel: Serial: AMBA PL011 UART driver
Jul 15 23:13:00.173969 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:13:00.173986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:13:00.174005 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 23:13:00.174023 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 23:13:00.174043 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:13:00.174062 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:13:00.174080 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 23:13:00.174104 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 23:13:00.174123 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:13:00.174141 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:13:00.174160 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:13:00.174178 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:13:00.174196 kernel: ACPI: Interpreter enabled
Jul 15 23:13:00.174213 kernel: ACPI: Using GIC for interrupt routing
Jul 15 23:13:00.174262 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 23:13:00.174292 kernel: ACPI: CPU0 has been hot-added
Jul 15 23:13:00.174319 kernel: ACPI: CPU1 has been hot-added
Jul 15 23:13:00.174338 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 15 23:13:00.174699 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:13:00.174937 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 23:13:00.175156 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 23:13:00.175418 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 15 23:13:00.175626 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 15 23:13:00.175667 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 15 23:13:00.175687 kernel: acpiphp: Slot [1] registered
Jul 15 23:13:00.175706 kernel: acpiphp: Slot [2] registered
Jul 15 23:13:00.175724 kernel: acpiphp: Slot [3] registered
Jul 15 23:13:00.175742 kernel: acpiphp: Slot [4] registered
Jul 15 23:13:00.175760 kernel: acpiphp: Slot [5] registered
Jul 15 23:13:00.175777 kernel: acpiphp: Slot [6] registered
Jul 15 23:13:00.175794 kernel: acpiphp: Slot [7] registered
Jul 15 23:13:00.175812 kernel: acpiphp: Slot [8] registered
Jul 15 23:13:00.175835 kernel: acpiphp: Slot [9] registered
Jul 15 23:13:00.175853 kernel: acpiphp: Slot [10] registered
Jul 15 23:13:00.175871 kernel: acpiphp: Slot [11] registered
Jul 15 23:13:00.175888 kernel: acpiphp: Slot [12] registered
Jul 15 23:13:00.175906 kernel: acpiphp: Slot [13] registered
Jul 15 23:13:00.175924 kernel: acpiphp: Slot [14] registered
Jul 15 23:13:00.175943 kernel: acpiphp: Slot [15] registered
Jul 15 23:13:00.175962 kernel: acpiphp: Slot [16] registered
Jul 15 23:13:00.175981 kernel: acpiphp: Slot [17] registered
Jul 15 23:13:00.175999 kernel: acpiphp: Slot [18] registered
Jul 15 23:13:00.176024 kernel: acpiphp: Slot [19] registered
Jul 15 23:13:00.176042 kernel: acpiphp: Slot [20] registered
Jul 15 23:13:00.176060 kernel: acpiphp: Slot [21] registered
Jul 15 23:13:00.176078 kernel: acpiphp: Slot [22] registered
Jul 15 23:13:00.176096 kernel: acpiphp: Slot [23] registered
Jul 15 23:13:00.176113 kernel: acpiphp: Slot [24] registered
Jul 15 23:13:00.176131 kernel: acpiphp: Slot [25] registered
Jul 15 23:13:00.176149 kernel: acpiphp: Slot [26] registered
Jul 15 23:13:00.176166 kernel: acpiphp: Slot [27] registered
Jul 15 23:13:00.176217 kernel: acpiphp: Slot [28] registered
Jul 15 23:13:00.176268 kernel: acpiphp: Slot [29] registered
Jul 15 23:13:00.176298 kernel: acpiphp: Slot [30] registered
Jul 15 23:13:00.176320 kernel: acpiphp: Slot [31] registered
Jul 15 23:13:00.176338 kernel: PCI host bridge to bus 0000:00
Jul 15 23:13:00.176601 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 15 23:13:00.176800 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 23:13:00.176987 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 15 23:13:00.177181 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 15 23:13:00.177477 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:13:00.177744 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jul 15 23:13:00.177977 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jul 15 23:13:00.178212 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jul 15 23:13:00.178507 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jul 15 23:13:00.178907 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 15 23:13:00.179182 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jul 15 23:13:00.179531 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jul 15 23:13:00.179769 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jul 15 23:13:00.179980 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jul 15 23:13:00.180220 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 15 23:13:00.180500 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Jul 15 23:13:00.180712 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Jul 15 23:13:00.180907 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Jul 15 23:13:00.181108 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Jul 15 23:13:00.181363 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Jul 15 23:13:00.181561 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 15 23:13:00.181757 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 23:13:00.181943 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 15 23:13:00.181981 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 23:13:00.182001 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 23:13:00.182020 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 23:13:00.182038 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 23:13:00.182056 kernel: iommu: Default domain type: Translated
Jul 15 23:13:00.182074 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 23:13:00.182092 kernel: efivars: Registered efivars operations
Jul 15 23:13:00.182109 kernel: vgaarb: loaded
Jul 15 23:13:00.182128 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 23:13:00.182153 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:13:00.182173 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:13:00.182192 kernel: pnp: PnP ACPI init
Jul 15 23:13:00.184576 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 15 23:13:00.184629 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 23:13:00.184650 kernel: NET: Registered PF_INET protocol family
Jul 15 23:13:00.184670 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:13:00.184690 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:13:00.184710 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:13:00.184768 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:13:00.184788 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:13:00.184807 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:13:00.184826 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:13:00.184844 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:13:00.184863 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:13:00.184881 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:13:00.184899 kernel: kvm [1]: HYP mode not available
Jul 15 23:13:00.184917 kernel: Initialise system trusted keyrings
Jul 15 23:13:00.184942 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:13:00.184961 kernel: Key type asymmetric registered
Jul 15 23:13:00.184980 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:13:00.184998 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 23:13:00.185017 kernel: io scheduler mq-deadline registered
Jul 15 23:13:00.185035 kernel: io scheduler kyber registered
Jul 15 23:13:00.185054 kernel: io scheduler bfq registered
Jul 15 23:13:00.185363 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 15 23:13:00.185409 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 23:13:00.185429 kernel: ACPI: button: Power Button [PWRB]
Jul 15 23:13:00.185447 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 15 23:13:00.185466 kernel: ACPI: button: Sleep Button [SLPB]
Jul 15 23:13:00.185485 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:13:00.185504 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 15 23:13:00.185746 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 15 23:13:00.185780 kernel: printk: legacy console [ttyS0] disabled
Jul 15 23:13:00.185800 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 15 23:13:00.185827 kernel: printk: legacy console [ttyS0] enabled
Jul 15 23:13:00.185846 kernel: printk: legacy bootconsole [uart0] disabled
Jul 15 23:13:00.185893 kernel: thunder_xcv, ver 1.0
Jul 15 23:13:00.185916 kernel: thunder_bgx, ver 1.0
Jul 15 23:13:00.185934 kernel: nicpf, ver 1.0
Jul 15 23:13:00.185953 kernel: nicvf, ver 1.0
Jul 15 23:13:00.186215 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 23:13:00.186484 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T23:12:59 UTC (1752621179)
Jul 15 23:13:00.186525 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 23:13:00.186545 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jul 15 23:13:00.186564 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:13:00.186582 kernel: watchdog: NMI not fully supported
Jul 15 23:13:00.186601 kernel: Segment Routing with IPv6
Jul 15 23:13:00.186620 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 23:13:00.186639 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:13:00.186658 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:13:00.186677 kernel: Key type dns_resolver registered
Jul 15 23:13:00.186701 kernel: registered taskstats version 1
Jul 15 23:13:00.186721 kernel: Loading compiled-in X.509 certificates
Jul 15 23:13:00.186739 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd'
Jul 15 23:13:00.186757 kernel: Demotion targets for Node 0: null
Jul 15 23:13:00.186775 kernel: Key type .fscrypt registered
Jul 15 23:13:00.186793 kernel: Key type fscrypt-provisioning registered
Jul 15 23:13:00.186811 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:13:00.186829 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:13:00.186847 kernel: ima: No architecture policies found
Jul 15 23:13:00.186870 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 23:13:00.186889 kernel: clk: Disabling unused clocks
Jul 15 23:13:00.186907 kernel: PM: genpd: Disabling unused power domains
Jul 15 23:13:00.186925 kernel: Warning: unable to open an initial console.
Jul 15 23:13:00.186943 kernel: Freeing unused kernel memory: 39488K
Jul 15 23:13:00.186962 kernel: Run /init as init process
Jul 15 23:13:00.186980 kernel: with arguments:
Jul 15 23:13:00.186998 kernel: /init
Jul 15 23:13:00.187016 kernel: with environment:
Jul 15 23:13:00.187038 kernel: HOME=/
Jul 15 23:13:00.187057 kernel: TERM=linux
Jul 15 23:13:00.187074 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:13:00.187095 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:13:00.187120 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:13:00.187141 systemd[1]: Detected virtualization amazon.
Jul 15 23:13:00.187160 systemd[1]: Detected architecture arm64.
Jul 15 23:13:00.187179 systemd[1]: Running in initrd.
Jul 15 23:13:00.187204 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:13:00.187224 systemd[1]: Hostname set to .
Jul 15 23:13:00.187282 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:13:00.187303 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:13:00.187323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:13:00.187343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:13:00.187364 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:13:00.187384 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:13:00.187413 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:13:00.187435 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:13:00.187457 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:13:00.187478 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:13:00.187498 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:13:00.187517 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:13:00.187543 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:13:00.187563 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:13:00.187583 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:13:00.187602 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:13:00.187622 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:13:00.187642 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:13:00.187661 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:13:00.187681 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:13:00.187701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:13:00.187726 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:13:00.187746 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:13:00.187766 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:13:00.187786 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:13:00.187806 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:13:00.187826 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:13:00.187846 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:13:00.187866 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:13:00.187886 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:13:00.187911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:13:00.187931 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:13:00.187951 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:13:00.187971 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:13:00.187996 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:13:00.188017 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:13:00.188095 systemd-journald[257]: Collecting audit messages is disabled.
Jul 15 23:13:00.188140 systemd-journald[257]: Journal started
Jul 15 23:13:00.188206 systemd-journald[257]: Runtime Journal (/run/log/journal/ec2afb85db2797974194bb98905e2eb6) is 8M, max 75.3M, 67.3M free.
Jul 15 23:13:00.169374 systemd-modules-load[259]: Inserted module 'overlay'
Jul 15 23:13:00.201792 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:13:00.203347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:13:00.216270 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:13:00.218421 systemd-modules-load[259]: Inserted module 'br_netfilter'
Jul 15 23:13:00.221520 kernel: Bridge firewalling registered
Jul 15 23:13:00.221851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:13:00.233549 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:13:00.241390 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:13:00.248494 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:13:00.262028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:13:00.271699 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:13:00.280467 systemd-tmpfiles[276]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:13:00.303042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:13:00.319985 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:13:00.324727 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:13:00.341626 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:13:00.346804 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:13:00.368501 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:13:00.399046 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:13:00.471283 systemd-resolved[300]: Positive Trust Anchors:
Jul 15 23:13:00.471322 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:13:00.471388 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:13:00.577276 kernel: SCSI subsystem initialized
Jul 15 23:13:00.587270 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:13:00.598270 kernel: iscsi: registered transport (tcp)
Jul 15 23:13:00.621284 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:13:00.621363 kernel: QLogic iSCSI HBA Driver
Jul 15 23:13:00.657432 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:13:00.698021 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:13:00.715393 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:13:00.750268 kernel: random: crng init done
Jul 15 23:13:00.749816 systemd-resolved[300]: Defaulting to hostname 'linux'.
Jul 15 23:13:00.754297 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:13:00.760270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:13:00.806708 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:13:00.822572 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:13:00.905288 kernel: raid6: neonx8 gen() 6471 MB/s
Jul 15 23:13:00.922278 kernel: raid6: neonx4 gen() 6433 MB/s
Jul 15 23:13:00.939273 kernel: raid6: neonx2 gen() 5323 MB/s
Jul 15 23:13:00.956277 kernel: raid6: neonx1 gen() 3925 MB/s
Jul 15 23:13:00.973281 kernel: raid6: int64x8 gen() 3622 MB/s
Jul 15 23:13:00.990283 kernel: raid6: int64x4 gen() 3681 MB/s
Jul 15 23:13:01.007276 kernel: raid6: int64x2 gen() 3549 MB/s
Jul 15 23:13:01.025292 kernel: raid6: int64x1 gen() 2764 MB/s
Jul 15 23:13:01.025351 kernel: raid6: using algorithm neonx8 gen() 6471 MB/s
Jul 15 23:13:01.044690 kernel: raid6: .... xor() 4752 MB/s, rmw enabled
Jul 15 23:13:01.044746 kernel: raid6: using neon recovery algorithm
Jul 15 23:13:01.053733 kernel: xor: measuring software checksum speed
Jul 15 23:13:01.053797 kernel: 8regs : 12531 MB/sec
Jul 15 23:13:01.054923 kernel: 32regs : 12736 MB/sec
Jul 15 23:13:01.057273 kernel: arm64_neon : 8574 MB/sec
Jul 15 23:13:01.057312 kernel: xor: using function: 32regs (12736 MB/sec)
Jul 15 23:13:01.150289 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 23:13:01.163216 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:13:01.179994 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:13:01.229363 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Jul 15 23:13:01.239504 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:13:01.255202 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 23:13:01.297794 dracut-pre-trigger[518]: rd.md=0: removing MD RAID activation
Jul 15 23:13:01.341004 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:13:01.350959 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:13:01.481286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:13:01.490165 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 23:13:01.626509 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 23:13:01.626583 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 15 23:13:01.640355 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 15 23:13:01.640698 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 15 23:13:01.659269 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:0b:04:30:73:13
Jul 15 23:13:01.671881 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 15 23:13:01.671953 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 15 23:13:01.671801 (udev-worker)[579]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 23:13:01.684670 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:13:01.687466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:13:01.694223 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:13:01.701521 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:13:01.709323 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 15 23:13:01.708679 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:13:01.719791 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 23:13:01.719858 kernel: GPT:9289727 != 16777215
Jul 15 23:13:01.719892 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 23:13:01.721029 kernel: GPT:9289727 != 16777215
Jul 15 23:13:01.721792 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 23:13:01.723726 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:13:01.754165 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:13:01.772308 kernel: nvme nvme0: using unchecked data buffer
Jul 15 23:13:01.940126 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 15 23:13:01.969152 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 15 23:13:01.974382 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:13:02.003662 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 15 23:13:02.043825 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 15 23:13:02.048746 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 15 23:13:02.054503 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:13:02.061043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:13:02.066462 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:13:02.074310 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 23:13:02.082302 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 23:13:02.115803 disk-uuid[687]: Primary Header is updated.
Jul 15 23:13:02.115803 disk-uuid[687]: Secondary Entries is updated.
Jul 15 23:13:02.115803 disk-uuid[687]: Secondary Header is updated.
Jul 15 23:13:02.133791 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:13:02.142274 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:13:02.152281 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:13:03.155312 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:13:03.157374 disk-uuid[692]: The operation has completed successfully.
Jul 15 23:13:03.346839 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 23:13:03.347068 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 23:13:03.454110 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 23:13:03.490121 sh[953]: Success
Jul 15 23:13:03.519364 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 23:13:03.519467 kernel: device-mapper: uevent: version 1.0.3
Jul 15 23:13:03.522006 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 23:13:03.534262 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 15 23:13:03.646944 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 23:13:03.662447 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 23:13:03.676317 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 23:13:03.707484 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 23:13:03.707581 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (976)
Jul 15 23:13:03.712556 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b
Jul 15 23:13:03.712640 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:13:03.713862 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 23:13:03.824058 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 23:13:03.830878 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:13:03.835915 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 23:13:03.841750 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 23:13:03.851511 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 23:13:03.921318 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1009)
Jul 15 23:13:03.926425 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:13:03.926511 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:13:03.927887 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 23:13:03.952334 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:13:03.954665 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 23:13:03.966187 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 23:13:04.052085 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:13:04.061433 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:13:04.141825 systemd-networkd[1145]: lo: Link UP
Jul 15 23:13:04.143615 systemd-networkd[1145]: lo: Gained carrier
Jul 15 23:13:04.148520 systemd-networkd[1145]: Enumeration completed
Jul 15 23:13:04.148870 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:13:04.152421 systemd-networkd[1145]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:13:04.152429 systemd-networkd[1145]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:13:04.155549 systemd[1]: Reached target network.target - Network.
Jul 15 23:13:04.164438 systemd-networkd[1145]: eth0: Link UP
Jul 15 23:13:04.164446 systemd-networkd[1145]: eth0: Gained carrier
Jul 15 23:13:04.164469 systemd-networkd[1145]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:13:04.201468 systemd-networkd[1145]: eth0: DHCPv4 address 172.31.19.30/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 15 23:13:04.462874 ignition[1084]: Ignition 2.21.0
Jul 15 23:13:04.462906 ignition[1084]: Stage: fetch-offline
Jul 15 23:13:04.466317 ignition[1084]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:13:04.466355 ignition[1084]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:13:04.467548 ignition[1084]: Ignition finished successfully
Jul 15 23:13:04.472503 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:13:04.479468 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 15 23:13:04.526742 ignition[1156]: Ignition 2.21.0
Jul 15 23:13:04.526781 ignition[1156]: Stage: fetch
Jul 15 23:13:04.528312 ignition[1156]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:13:04.528340 ignition[1156]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:13:04.528646 ignition[1156]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:13:04.541393 ignition[1156]: PUT result: OK
Jul 15 23:13:04.545056 ignition[1156]: parsed url from cmdline: ""
Jul 15 23:13:04.545267 ignition[1156]: no config URL provided
Jul 15 23:13:04.545485 ignition[1156]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 23:13:04.545514 ignition[1156]: no config at "/usr/lib/ignition/user.ign"
Jul 15 23:13:04.545581 ignition[1156]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:13:04.559815 ignition[1156]: PUT result: OK
Jul 15 23:13:04.561475 ignition[1156]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 15 23:13:04.564727 ignition[1156]: GET result: OK
Jul 15 23:13:04.568716 ignition[1156]: parsing config with SHA512: 712859b105af2bc8fba235f63e04485363a8d32abe7c64c6cafae0138ce2d4808d1035c3ac24415135a38da928aa5071a0ba25ef12984a3dc34c79042e16b8a3
Jul 15 23:13:04.579505 unknown[1156]: fetched base config from "system"
Jul 15 23:13:04.580219 ignition[1156]: fetch: fetch complete
Jul 15 23:13:04.579529 unknown[1156]: fetched base config from "system"
Jul 15 23:13:04.580273 ignition[1156]: fetch: fetch passed
Jul 15 23:13:04.579543 unknown[1156]: fetched user config from "aws"
Jul 15 23:13:04.580767 ignition[1156]: Ignition finished successfully
Jul 15 23:13:04.600355 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 15 23:13:04.606376 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 23:13:04.650930 ignition[1163]: Ignition 2.21.0
Jul 15 23:13:04.650965 ignition[1163]: Stage: kargs
Jul 15 23:13:04.651587 ignition[1163]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:13:04.651615 ignition[1163]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:13:04.651777 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:13:04.655152 ignition[1163]: PUT result: OK
Jul 15 23:13:04.674139 ignition[1163]: kargs: kargs passed
Jul 15 23:13:04.674329 ignition[1163]: Ignition finished successfully
Jul 15 23:13:04.680550 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 23:13:04.686421 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 23:13:04.735159 ignition[1170]: Ignition 2.21.0
Jul 15 23:13:04.735310 ignition[1170]: Stage: disks
Jul 15 23:13:04.735890 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:13:04.735916 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:13:04.736079 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:13:04.740228 ignition[1170]: PUT result: OK
Jul 15 23:13:04.757058 ignition[1170]: disks: disks passed
Jul 15 23:13:04.757210 ignition[1170]: Ignition finished successfully
Jul 15 23:13:04.763451 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 23:13:04.770162 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 23:13:04.775388 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 23:13:04.780109 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:13:04.784691 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:13:04.789013 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:13:04.795227 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 23:13:04.870452 systemd-fsck[1178]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 23:13:04.874511 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 23:13:04.883999 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 23:13:05.011296 kernel: EXT4-fs (nvme0n1p9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none.
Jul 15 23:13:05.013165 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 23:13:05.018515 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:13:05.023484 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:13:05.035400 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 23:13:05.047100 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 23:13:05.047511 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 23:13:05.047567 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:13:05.068964 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 23:13:05.075753 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 23:13:05.089280 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1197)
Jul 15 23:13:05.094588 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:13:05.094655 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:13:05.096208 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 23:13:05.105373 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:13:05.451509 initrd-setup-root[1221]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 23:13:05.473287 initrd-setup-root[1228]: cut: /sysroot/etc/group: No such file or directory
Jul 15 23:13:05.494288 initrd-setup-root[1235]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 23:13:05.507635 initrd-setup-root[1242]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 23:13:05.812270 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 23:13:05.820078 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 23:13:05.826025 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 23:13:05.861093 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 23:13:05.866307 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:13:05.898478 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 23:13:05.915152 ignition[1310]: INFO : Ignition 2.21.0
Jul 15 23:13:05.915152 ignition[1310]: INFO : Stage: mount
Jul 15 23:13:05.920036 ignition[1310]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:13:05.920036 ignition[1310]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:13:05.920036 ignition[1310]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:13:05.920036 ignition[1310]: INFO : PUT result: OK
Jul 15 23:13:05.934504 ignition[1310]: INFO : mount: mount passed
Jul 15 23:13:05.938333 ignition[1310]: INFO : Ignition finished successfully
Jul 15 23:13:05.942340 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 23:13:05.947383 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 23:13:06.017221 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:13:06.069302 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1321)
Jul 15 23:13:06.073437 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:13:06.073546 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:13:06.073577 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 23:13:06.084571 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:13:06.133903 ignition[1338]: INFO : Ignition 2.21.0
Jul 15 23:13:06.133903 ignition[1338]: INFO : Stage: files
Jul 15 23:13:06.137630 ignition[1338]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:13:06.137630 ignition[1338]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:13:06.137630 ignition[1338]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:13:06.148613 ignition[1338]: INFO : PUT result: OK
Jul 15 23:13:06.155043 ignition[1338]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 23:13:06.158355 ignition[1338]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 23:13:06.158355 ignition[1338]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 23:13:06.167411 ignition[1338]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 23:13:06.171058 ignition[1338]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 23:13:06.174811 unknown[1338]: wrote ssh authorized keys file for user: core
Jul 15 23:13:06.177301 ignition[1338]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 23:13:06.185477 systemd-networkd[1145]: eth0: Gained IPv6LL
Jul 15 23:13:06.199472 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 15 23:13:06.204382 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 15 23:13:06.437856 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 23:13:07.321277 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 15 23:13:07.321277 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 23:13:07.331069 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 23:13:07.331069 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:13:07.331069 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:13:07.331069 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:13:07.331069 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:13:07.331069 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:13:07.331069 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:13:07.359283 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:13:07.359283 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:13:07.359283 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 23:13:07.359283 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 23:13:07.359283 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 23:13:07.359283 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 15 23:13:07.814612 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 23:13:08.214746 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 15 23:13:08.214746 ignition[1338]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 23:13:08.222478 ignition[1338]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:13:08.227064 ignition[1338]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:13:08.231389 ignition[1338]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 23:13:08.231389 ignition[1338]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 23:13:08.231389 ignition[1338]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 23:13:08.231389 ignition[1338]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:13:08.231389 ignition[1338]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:13:08.231389 ignition[1338]: INFO : files: files passed
Jul 15 23:13:08.231389 ignition[1338]: INFO : Ignition finished successfully
Jul 15 23:13:08.254761 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 23:13:08.262331 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 23:13:08.279384 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 23:13:08.289768 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 23:13:08.290770 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 23:13:08.313167 initrd-setup-root-after-ignition[1368]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:13:08.313167 initrd-setup-root-after-ignition[1368]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:13:08.328712 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:13:08.335401 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:13:08.341665 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 23:13:08.345886 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 23:13:08.448772 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 23:13:08.449816 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 23:13:08.457186 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 23:13:08.461914 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 23:13:08.464616 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 23:13:08.471095 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 23:13:08.529959 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:13:08.537295 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 23:13:08.586801 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:13:08.592146 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:13:08.596778 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 23:13:08.599846 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 23:13:08.600127 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:13:08.609006 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 23:13:08.616911 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 23:13:08.622129 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 23:13:08.627904 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:13:08.634158 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 23:13:08.639486 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:13:08.643176 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 23:13:08.648417 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:13:08.656099 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 23:13:08.660040 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 23:13:08.666813 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 23:13:08.669919 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 23:13:08.670222 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:13:08.678784 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:13:08.684497 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:13:08.687793 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 23:13:08.692502 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:13:08.695975 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 23:13:08.696673 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:13:08.706225 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 23:13:08.706617 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:13:08.711941 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 23:13:08.712674 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 23:13:08.725703 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 23:13:08.734492 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 23:13:08.743840 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 23:13:08.746572 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:13:08.752714 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 23:13:08.753291 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:13:08.780458 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 23:13:08.782632 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 23:13:08.801637 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 23:13:08.813504 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 23:13:08.815622 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 23:13:08.822135 ignition[1392]: INFO : Ignition 2.21.0
Jul 15 23:13:08.822135 ignition[1392]: INFO : Stage: umount
Jul 15 23:13:08.826121 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:13:08.826121 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:13:08.826121 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:13:08.834888 ignition[1392]: INFO : PUT result: OK
Jul 15 23:13:08.845592 ignition[1392]: INFO : umount: umount passed
Jul 15 23:13:08.845592 ignition[1392]: INFO : Ignition finished successfully
Jul 15 23:13:08.850942 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 23:13:08.851832 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 23:13:08.862552 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 23:13:08.862669 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 23:13:08.865793 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 23:13:08.865912 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 23:13:08.874560 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 15 23:13:08.874699 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 15 23:13:08.879441 systemd[1]: Stopped target network.target - Network.
Jul 15 23:13:08.881872 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 23:13:08.882002 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:13:08.885534 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 23:13:08.888425 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 23:13:08.893191 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:13:08.896125 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 23:13:08.898787 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 23:13:08.906391 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 23:13:08.906480 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:13:08.909823 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 23:13:08.909900 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:13:08.914066 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 23:13:08.914186 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 23:13:08.917784 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 23:13:08.917886 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 23:13:08.922828 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 23:13:08.922959 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 23:13:08.927519 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 23:13:08.930552 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 23:13:08.973214 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 23:13:08.980376 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 23:13:08.989431 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 23:13:08.992823 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 23:13:08.993217 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 23:13:09.006032 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 23:13:09.007666 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 23:13:09.014124 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 23:13:09.014226 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:13:09.020850 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 23:13:09.028782 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 23:13:09.028946 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:13:09.032095 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 23:13:09.032211 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:13:09.044432 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 23:13:09.047610 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:13:09.053884 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 23:13:09.054002 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:13:09.057456 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:13:09.071088 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 23:13:09.071335 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:13:09.091960 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 23:13:09.097068 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:13:09.101835 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 23:13:09.101988 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:13:09.110627 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 23:13:09.110761 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:13:09.117562 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 23:13:09.117687 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:13:09.125301 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 23:13:09.125420 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:13:09.132933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 23:13:09.133597 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:13:09.142481 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 23:13:09.148311 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 23:13:09.148448 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:13:09.160211 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 23:13:09.160383 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:13:09.188388 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 15 23:13:09.188505 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:13:09.200348 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 23:13:09.200464 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:13:09.205348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:13:09.205543 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:13:09.219096 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 23:13:09.219220 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 15 23:13:09.219357 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 23:13:09.219454 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:13:09.220421 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 23:13:09.220677 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 23:13:09.227379 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 23:13:09.227573 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 23:13:09.236452 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 23:13:09.248847 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 23:13:09.293135 systemd[1]: Switching root.
Jul 15 23:13:09.337053 systemd-journald[257]: Journal stopped
Jul 15 23:13:12.198565 systemd-journald[257]: Received SIGTERM from PID 1 (systemd).
Jul 15 23:13:12.198712 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 23:13:12.198759 kernel: SELinux: policy capability open_perms=1
Jul 15 23:13:12.198790 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 23:13:12.198821 kernel: SELinux: policy capability always_check_network=0
Jul 15 23:13:12.198850 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 23:13:12.198882 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 23:13:12.198912 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 23:13:12.198942 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 23:13:12.198975 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 23:13:12.199004 kernel: audit: type=1403 audit(1752621189.810:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 23:13:12.199044 systemd[1]: Successfully loaded SELinux policy in 88.313ms.
Jul 15 23:13:12.199099 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.520ms.
Jul 15 23:13:12.199135 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:13:12.199176 systemd[1]: Detected virtualization amazon.
Jul 15 23:13:12.199206 systemd[1]: Detected architecture arm64.
Jul 15 23:13:12.207317 systemd[1]: Detected first boot.
Jul 15 23:13:12.207397 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:13:12.207440 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 23:13:12.207484 zram_generator::config[1439]: No configuration found.
Jul 15 23:13:12.207517 systemd[1]: Populated /etc with preset unit settings.
Jul 15 23:13:12.207551 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 23:13:12.207581 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 23:13:12.207612 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 23:13:12.207644 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:13:12.207674 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 23:13:12.207708 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 23:13:12.207737 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 23:13:12.207768 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 23:13:12.207798 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 23:13:12.207831 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 23:13:12.207861 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 23:13:12.207892 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 23:13:12.207921 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:13:12.207950 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:13:12.207987 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 23:13:12.208016 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 23:13:12.208073 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 23:13:12.208121 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:13:12.208154 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 23:13:12.208182 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:13:12.208210 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:13:12.208271 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 23:13:12.208304 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 23:13:12.208333 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:13:12.208367 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 23:13:12.208396 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:13:12.208429 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:13:12.208458 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:13:12.208488 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:13:12.208516 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 23:13:12.208552 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 23:13:12.208582 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 23:13:12.208614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:13:12.208644 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:13:12.208675 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:13:12.208707 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 23:13:12.208736 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 23:13:12.208769 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 23:13:12.208800 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 23:13:12.208835 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 23:13:12.208867 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 23:13:12.208895 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 23:13:12.208926 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 23:13:12.221353 systemd[1]: Reached target machines.target - Containers.
Jul 15 23:13:12.221495 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 23:13:12.221544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:13:12.221581 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:13:12.223841 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 23:13:12.223937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:13:12.223968 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:13:12.224002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:13:12.224037 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 23:13:12.224089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:13:12.224134 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 23:13:12.224168 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 23:13:12.224199 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 23:13:12.224266 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 23:13:12.224301 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 23:13:12.224335 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:13:12.224365 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:13:12.224415 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:13:12.224448 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:13:12.224480 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 23:13:12.224509 kernel: loop: module loaded
Jul 15 23:13:12.224540 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 23:13:12.224576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:13:12.224613 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 23:13:12.224643 systemd[1]: Stopped verity-setup.service.
Jul 15 23:13:12.224689 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 23:13:12.224732 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 23:13:12.224765 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 23:13:12.224796 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 23:13:12.224828 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 23:13:12.224857 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 23:13:12.224886 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:13:12.224920 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 23:13:12.224949 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 23:13:12.224978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:13:12.225007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:13:12.225040 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:13:12.225069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:13:12.225098 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:13:12.225130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:13:12.225157 kernel: fuse: init (API version 7.41)
Jul 15 23:13:12.225189 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 23:13:12.225219 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:13:12.230747 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:13:12.230796 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 23:13:12.230835 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 23:13:12.230866 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 23:13:12.230897 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:13:12.230931 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:13:12.230961 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 23:13:12.231003 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 23:13:12.231039 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:13:12.231072 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 23:13:12.231101 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 23:13:12.231134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:13:12.231170 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 23:13:12.231201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:13:12.231262 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 23:13:12.231312 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 23:13:12.231392 systemd-journald[1522]: Collecting audit messages is disabled.
Jul 15 23:13:12.231448 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:13:12.231482 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:13:12.231519 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 23:13:12.231548 kernel: ACPI: bus type drm_connector registered
Jul 15 23:13:12.231578 systemd-journald[1522]: Journal started
Jul 15 23:13:12.231628 systemd-journald[1522]: Runtime Journal (/run/log/journal/ec2afb85db2797974194bb98905e2eb6) is 8M, max 75.3M, 67.3M free.
Jul 15 23:13:11.368834 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 23:13:12.239336 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:13:12.239416 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:13:11.394977 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 15 23:13:11.395988 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 23:13:12.252405 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:13:12.276481 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 23:13:12.316355 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:13:12.321131 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 23:13:12.330596 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 23:13:12.348312 kernel: loop0: detected capacity change from 0 to 203944
Jul 15 23:13:12.339769 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 23:13:12.358491 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 23:13:12.395444 systemd-tmpfiles[1538]: ACLs are not supported, ignoring.
Jul 15 23:13:12.395485 systemd-tmpfiles[1538]: ACLs are not supported, ignoring.
Jul 15 23:13:12.411404 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 23:13:12.420275 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 23:13:12.430429 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 23:13:12.431609 systemd-journald[1522]: Time spent on flushing to /var/log/journal/ec2afb85db2797974194bb98905e2eb6 is 70.881ms for 939 entries.
Jul 15 23:13:12.431609 systemd-journald[1522]: System Journal (/var/log/journal/ec2afb85db2797974194bb98905e2eb6) is 8M, max 195.6M, 187.6M free.
Jul 15 23:13:12.543529 systemd-journald[1522]: Received client request to flush runtime journal.
Jul 15 23:13:12.543643 kernel: loop1: detected capacity change from 0 to 107312
Jul 15 23:13:12.433704 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 23:13:12.448658 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 23:13:12.465310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:13:12.473336 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 23:13:12.552482 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 23:13:12.604340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:13:12.624283 kernel: loop2: detected capacity change from 0 to 61240
Jul 15 23:13:12.626590 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 23:13:12.635666 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:13:12.689775 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Jul 15 23:13:12.689818 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Jul 15 23:13:12.700344 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:13:12.781596 kernel: loop3: detected capacity change from 0 to 138376
Jul 15 23:13:12.892277 kernel: loop4: detected capacity change from 0 to 203944
Jul 15 23:13:12.928327 kernel: loop5: detected capacity change from 0 to 107312
Jul 15 23:13:12.942271 kernel: loop6: detected capacity change from 0 to 61240
Jul 15 23:13:12.961278 kernel: loop7: detected capacity change from 0 to 138376
Jul 15 23:13:12.978888 (sd-merge)[1600]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 15 23:13:12.980157 (sd-merge)[1600]: Merged extensions into '/usr'.
Jul 15 23:13:12.992271 systemd[1]: Reload requested from client PID 1548 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 23:13:12.992306 systemd[1]: Reloading...
Jul 15 23:13:13.190286 zram_generator::config[1622]: No configuration found.
Jul 15 23:13:13.475408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:13:13.685282 systemd[1]: Reloading finished in 691 ms.
Jul 15 23:13:13.709336 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 23:13:13.713163 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 23:13:13.731556 systemd[1]: Starting ensure-sysext.service...
Jul 15 23:13:13.738664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:13:13.748581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:13:13.795558 systemd[1]: Reload requested from client PID 1678 ('systemctl') (unit ensure-sysext.service)...
Jul 15 23:13:13.795599 systemd[1]: Reloading...
Jul 15 23:13:13.806593 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 23:13:13.807261 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 23:13:13.808275 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 23:13:13.809278 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 23:13:13.811555 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 23:13:13.812548 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Jul 15 23:13:13.812854 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Jul 15 23:13:13.824056 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:13:13.826473 systemd-tmpfiles[1679]: Skipping /boot
Jul 15 23:13:13.872730 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:13:13.872759 systemd-tmpfiles[1679]: Skipping /boot
Jul 15 23:13:13.922606 systemd-udevd[1680]: Using default interface naming scheme 'v255'.
Jul 15 23:13:14.038287 zram_generator::config[1711]: No configuration found.
Jul 15 23:13:14.087291 ldconfig[1545]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 23:13:14.410123 (udev-worker)[1737]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 23:13:14.428475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:13:14.761530 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 15 23:13:14.762476 systemd[1]: Reloading finished in 966 ms.
Jul 15 23:13:14.792524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:13:14.801373 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 23:13:14.828367 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:13:14.872286 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:13:14.878760 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 23:13:14.888557 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 23:13:14.895484 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:13:14.904627 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:13:14.948822 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 23:13:14.965902 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 23:13:14.974299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:13:14.979533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:13:14.991193 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:13:15.012466 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:13:15.020446 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:13:15.020713 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:13:15.030752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:13:15.031160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:13:15.031439 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:13:15.042906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:13:15.094083 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:13:15.096681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:13:15.096999 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:13:15.097863 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 23:13:15.144106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:13:15.144691 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:13:15.147915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:13:15.157207 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:13:15.159405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:13:15.177965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:13:15.181403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:13:15.193673 systemd[1]: Finished ensure-sysext.service. Jul 15 23:13:15.197379 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 23:13:15.205659 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 23:13:15.209429 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:13:15.210675 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:13:15.319716 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 23:13:15.334788 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:13:15.342749 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 23:13:15.345484 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 23:13:15.384539 augenrules[1933]: No rules Jul 15 23:13:15.388847 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:13:15.419381 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:13:15.471136 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 23:13:15.500285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:13:15.528711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 15 23:13:15.548735 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 23:13:15.636358 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jul 15 23:13:15.655688 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 23:13:15.735453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:13:15.818844 systemd-resolved[1868]: Positive Trust Anchors: Jul 15 23:13:15.819368 systemd-resolved[1868]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:13:15.819542 systemd-resolved[1868]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:13:15.825116 systemd-networkd[1867]: lo: Link UP Jul 15 23:13:15.825138 systemd-networkd[1867]: lo: Gained carrier Jul 15 23:13:15.828108 systemd-networkd[1867]: Enumeration completed Jul 15 23:13:15.828328 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:13:15.833209 systemd-networkd[1867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:13:15.833249 systemd-networkd[1867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:13:15.835350 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 23:13:15.838091 systemd-resolved[1868]: Defaulting to hostname 'linux'. Jul 15 23:13:15.842481 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 15 23:13:15.843661 systemd-networkd[1867]: eth0: Link UP Jul 15 23:13:15.844120 systemd-networkd[1867]: eth0: Gained carrier Jul 15 23:13:15.844171 systemd-networkd[1867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:13:15.851673 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:13:15.854950 systemd[1]: Reached target network.target - Network. Jul 15 23:13:15.857153 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:13:15.862323 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:13:15.862594 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 23:13:15.862740 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 23:13:15.862845 systemd-networkd[1867]: eth0: DHCPv4 address 172.31.19.30/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 15 23:13:15.871290 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 23:13:15.875750 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 23:13:15.878679 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 23:13:15.881892 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 23:13:15.882118 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:13:15.884501 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:13:15.890855 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 23:13:15.896634 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jul 15 23:13:15.906779 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 23:13:15.910066 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 23:13:15.913997 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 23:13:15.930880 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 23:13:15.934079 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 23:13:15.938407 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 23:13:15.941326 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:13:15.943530 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:13:15.946463 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:13:15.946527 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:13:15.965678 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 23:13:15.974582 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 23:13:15.984486 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 23:13:15.991720 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 23:13:16.001137 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 23:13:16.013654 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 23:13:16.018897 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 23:13:16.027181 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 15 23:13:16.036387 systemd[1]: Started ntpd.service - Network Time Service. Jul 15 23:13:16.046602 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 23:13:16.051699 jq[1966]: false Jul 15 23:13:16.058717 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 15 23:13:16.065374 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 23:13:16.072506 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 23:13:16.086852 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 23:13:16.093325 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 23:13:16.099638 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 23:13:16.107716 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 23:13:16.118582 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 23:13:16.127390 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 23:13:16.136336 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 23:13:16.140446 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 23:13:16.142454 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 23:13:16.252550 tar[1985]: linux-arm64/helm Jul 15 23:13:16.257093 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 23:13:16.259080 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 23:13:16.264487 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 15 23:13:16.264970 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 23:13:16.280846 jq[1977]: true Jul 15 23:13:16.290302 extend-filesystems[1967]: Found /dev/nvme0n1p6 Jul 15 23:13:16.296613 ntpd[1969]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:38 UTC 2025 (1): Starting Jul 15 23:13:16.300773 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:38 UTC 2025 (1): Starting Jul 15 23:13:16.300773 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 23:13:16.300773 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: ---------------------------------------------------- Jul 15 23:13:16.300773 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: ntp-4 is maintained by Network Time Foundation, Jul 15 23:13:16.300773 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 23:13:16.300773 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: corporation. Support and training for ntp-4 are Jul 15 23:13:16.300773 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: available at https://www.nwtime.org/support Jul 15 23:13:16.300773 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: ---------------------------------------------------- Jul 15 23:13:16.296659 ntpd[1969]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 23:13:16.296679 ntpd[1969]: ---------------------------------------------------- Jul 15 23:13:16.296696 ntpd[1969]: ntp-4 is maintained by Network Time Foundation, Jul 15 23:13:16.296713 ntpd[1969]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 23:13:16.296730 ntpd[1969]: corporation. 
Support and training for ntp-4 are Jul 15 23:13:16.296750 ntpd[1969]: available at https://www.nwtime.org/support Jul 15 23:13:16.296767 ntpd[1969]: ---------------------------------------------------- Jul 15 23:13:16.311802 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: proto: precision = 0.096 usec (-23) Jul 15 23:13:16.311074 (ntainerd)[1995]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 23:13:16.311426 ntpd[1969]: proto: precision = 0.096 usec (-23) Jul 15 23:13:16.313189 ntpd[1969]: basedate set to 2025-07-03 Jul 15 23:13:16.315406 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: basedate set to 2025-07-03 Jul 15 23:13:16.315406 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: gps base set to 2025-07-06 (week 2374) Jul 15 23:13:16.313229 ntpd[1969]: gps base set to 2025-07-06 (week 2374) Jul 15 23:13:16.321206 ntpd[1969]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 23:13:16.324316 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 23:13:16.324316 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 23:13:16.324316 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 23:13:16.324316 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: Listen normally on 3 eth0 172.31.19.30:123 Jul 15 23:13:16.324316 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: Listen normally on 4 lo [::1]:123 Jul 15 23:13:16.324316 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: bind(21) AF_INET6 fe80::40b:4ff:fe30:7313%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 23:13:16.324316 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: unable to create socket on eth0 (5) for fe80::40b:4ff:fe30:7313%2#123 Jul 15 23:13:16.324316 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: failed to init interface for address fe80::40b:4ff:fe30:7313%2 Jul 15 23:13:16.324316 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: Listening on routing socket on fd #21 for interface updates 
Jul 15 23:13:16.323417 ntpd[1969]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 23:13:16.323725 ntpd[1969]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 23:13:16.323795 ntpd[1969]: Listen normally on 3 eth0 172.31.19.30:123 Jul 15 23:13:16.323865 ntpd[1969]: Listen normally on 4 lo [::1]:123 Jul 15 23:13:16.323950 ntpd[1969]: bind(21) AF_INET6 fe80::40b:4ff:fe30:7313%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 23:13:16.323990 ntpd[1969]: unable to create socket on eth0 (5) for fe80::40b:4ff:fe30:7313%2#123 Jul 15 23:13:16.324045 ntpd[1969]: failed to init interface for address fe80::40b:4ff:fe30:7313%2 Jul 15 23:13:16.324110 ntpd[1969]: Listening on routing socket on fd #21 for interface updates Jul 15 23:13:16.333504 extend-filesystems[1967]: Found /dev/nvme0n1p9 Jul 15 23:13:16.349309 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:13:16.352153 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:13:16.352153 ntpd[1969]: 15 Jul 23:13:16 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:13:16.349371 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:13:16.367643 extend-filesystems[1967]: Checking size of /dev/nvme0n1p9 Jul 15 23:13:16.383187 dbus-daemon[1964]: [system] SELinux support is enabled Jul 15 23:13:16.383623 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 23:13:16.401052 dbus-daemon[1964]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1867 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 15 23:13:16.390914 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 15 23:13:16.390989 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 23:13:16.394110 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 23:13:16.394152 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 23:13:16.404185 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 23:13:16.411555 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 15 23:13:16.416349 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 15 23:13:16.422005 jq[2010]: true Jul 15 23:13:16.456604 extend-filesystems[1967]: Resized partition /dev/nvme0n1p9 Jul 15 23:13:16.467271 extend-filesystems[2024]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 23:13:16.483387 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 15 23:13:16.508057 update_engine[1975]: I20250715 23:13:16.505834 1975 main.cc:92] Flatcar Update Engine starting Jul 15 23:13:16.524484 systemd[1]: Started update-engine.service - Update Engine. Jul 15 23:13:16.527007 update_engine[1975]: I20250715 23:13:16.526921 1975 update_check_scheduler.cc:74] Next update check in 11m23s Jul 15 23:13:16.573382 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 15 23:13:16.580968 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 23:13:16.594275 extend-filesystems[2024]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 15 23:13:16.594275 extend-filesystems[2024]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 23:13:16.594275 extend-filesystems[2024]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Jul 15 23:13:16.609032 extend-filesystems[1967]: Resized filesystem in /dev/nvme0n1p9 Jul 15 23:13:16.606800 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 23:13:16.611356 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 23:13:16.652121 bash[2040]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:13:16.646097 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 23:13:16.653435 systemd[1]: Starting sshkeys.service... Jul 15 23:13:16.675292 coreos-metadata[1963]: Jul 15 23:13:16.674 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 23:13:16.683446 coreos-metadata[1963]: Jul 15 23:13:16.683 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 15 23:13:16.683446 coreos-metadata[1963]: Jul 15 23:13:16.683 INFO Fetch successful Jul 15 23:13:16.683446 coreos-metadata[1963]: Jul 15 23:13:16.683 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 15 23:13:16.687456 coreos-metadata[1963]: Jul 15 23:13:16.686 INFO Fetch successful Jul 15 23:13:16.687456 coreos-metadata[1963]: Jul 15 23:13:16.686 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 15 23:13:16.693717 coreos-metadata[1963]: Jul 15 23:13:16.693 INFO Fetch successful Jul 15 23:13:16.693717 coreos-metadata[1963]: Jul 15 23:13:16.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 15 23:13:16.693717 coreos-metadata[1963]: Jul 15 23:13:16.693 INFO Fetch successful Jul 15 23:13:16.693717 coreos-metadata[1963]: Jul 15 23:13:16.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 15 23:13:16.693717 coreos-metadata[1963]: Jul 15 23:13:16.693 INFO Fetch failed with 404: resource not found Jul 15 23:13:16.693717 coreos-metadata[1963]: Jul 15 23:13:16.693 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 15 23:13:16.693717 coreos-metadata[1963]: Jul 15 23:13:16.693 INFO Fetch successful Jul 15 23:13:16.693717 coreos-metadata[1963]: Jul 15 23:13:16.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 15 23:13:16.695338 coreos-metadata[1963]: Jul 15 23:13:16.695 INFO Fetch successful Jul 15 23:13:16.695338 coreos-metadata[1963]: Jul 15 23:13:16.695 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 15 23:13:16.697597 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 23:13:16.704622 coreos-metadata[1963]: Jul 15 23:13:16.704 INFO Fetch successful Jul 15 23:13:16.704622 coreos-metadata[1963]: Jul 15 23:13:16.704 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 15 23:13:16.704622 coreos-metadata[1963]: Jul 15 23:13:16.704 INFO Fetch successful Jul 15 23:13:16.704874 coreos-metadata[1963]: Jul 15 23:13:16.704 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 15 23:13:16.713292 coreos-metadata[1963]: Jul 15 23:13:16.709 INFO Fetch successful Jul 15 23:13:16.755761 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 15 23:13:16.764405 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 15 23:13:16.899371 systemd-logind[1974]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 23:13:16.900077 systemd-logind[1974]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 15 23:13:16.903135 systemd-logind[1974]: New seat seat0. Jul 15 23:13:16.916728 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 23:13:16.987705 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jul 15 23:13:16.994721 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 23:13:17.300944 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 15 23:13:17.304865 ntpd[1969]: bind(24) AF_INET6 fe80::40b:4ff:fe30:7313%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 23:13:17.306504 ntpd[1969]: 15 Jul 23:13:17 ntpd[1969]: bind(24) AF_INET6 fe80::40b:4ff:fe30:7313%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 23:13:17.306504 ntpd[1969]: 15 Jul 23:13:17 ntpd[1969]: unable to create socket on eth0 (6) for fe80::40b:4ff:fe30:7313%2#123 Jul 15 23:13:17.306504 ntpd[1969]: 15 Jul 23:13:17 ntpd[1969]: failed to init interface for address fe80::40b:4ff:fe30:7313%2 Jul 15 23:13:17.304973 ntpd[1969]: unable to create socket on eth0 (6) for fe80::40b:4ff:fe30:7313%2#123 Jul 15 23:13:17.305002 ntpd[1969]: failed to init interface for address fe80::40b:4ff:fe30:7313%2 Jul 15 23:13:17.315397 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 15 23:13:17.329024 dbus-daemon[1964]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2017 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 15 23:13:17.343805 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 15 23:13:17.376682 coreos-metadata[2054]: Jul 15 23:13:17.376 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 23:13:17.381274 coreos-metadata[2054]: Jul 15 23:13:17.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 15 23:13:17.382308 coreos-metadata[2054]: Jul 15 23:13:17.382 INFO Fetch successful Jul 15 23:13:17.382308 coreos-metadata[2054]: Jul 15 23:13:17.382 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 15 23:13:17.387522 coreos-metadata[2054]: Jul 15 23:13:17.386 INFO Fetch successful Jul 15 23:13:17.388875 unknown[2054]: wrote ssh authorized keys file for user: core Jul 15 23:13:17.453558 containerd[1995]: time="2025-07-15T23:13:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 23:13:17.460273 containerd[1995]: time="2025-07-15T23:13:17.458842021Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 15 23:13:17.486215 containerd[1995]: time="2025-07-15T23:13:17.486134293Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.624µs" Jul 15 23:13:17.486638 containerd[1995]: time="2025-07-15T23:13:17.486585085Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 23:13:17.486785 containerd[1995]: time="2025-07-15T23:13:17.486753973Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 23:13:17.488626 containerd[1995]: time="2025-07-15T23:13:17.488561869Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 23:13:17.490694 containerd[1995]: time="2025-07-15T23:13:17.489985657Z" level=info msg="loading 
plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 23:13:17.492087 containerd[1995]: time="2025-07-15T23:13:17.491105185Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:13:17.492087 containerd[1995]: time="2025-07-15T23:13:17.491959477Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:13:17.492579 containerd[1995]: time="2025-07-15T23:13:17.492021073Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:13:17.495115 update-ssh-keys[2149]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:13:17.496865 containerd[1995]: time="2025-07-15T23:13:17.496280269Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:13:17.496865 containerd[1995]: time="2025-07-15T23:13:17.496613797Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:13:17.496865 containerd[1995]: time="2025-07-15T23:13:17.496704973Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:13:17.496865 containerd[1995]: time="2025-07-15T23:13:17.496733413Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 23:13:17.498380 containerd[1995]: time="2025-07-15T23:13:17.498101713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 23:13:17.499828 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar 
Metadata Agent (SSH Keys). Jul 15 23:13:17.508426 containerd[1995]: time="2025-07-15T23:13:17.499892677Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:13:17.509351 containerd[1995]: time="2025-07-15T23:13:17.509085205Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:13:17.509570 containerd[1995]: time="2025-07-15T23:13:17.509311153Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 23:13:17.509774 containerd[1995]: time="2025-07-15T23:13:17.509703745Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 23:13:17.511955 containerd[1995]: time="2025-07-15T23:13:17.511887337Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 23:13:17.513139 containerd[1995]: time="2025-07-15T23:13:17.512891161Z" level=info msg="metadata content store policy set" policy=shared Jul 15 23:13:17.514977 systemd[1]: Finished sshkeys.service. 
Jul 15 23:13:17.524279 containerd[1995]: time="2025-07-15T23:13:17.523837693Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 23:13:17.524279 containerd[1995]: time="2025-07-15T23:13:17.523979485Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 23:13:17.524279 containerd[1995]: time="2025-07-15T23:13:17.524110057Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 23:13:17.524279 containerd[1995]: time="2025-07-15T23:13:17.524165065Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 23:13:17.524279 containerd[1995]: time="2025-07-15T23:13:17.524202313Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 23:13:17.525271 containerd[1995]: time="2025-07-15T23:13:17.524665729Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 23:13:17.525271 containerd[1995]: time="2025-07-15T23:13:17.524726377Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 23:13:17.525271 containerd[1995]: time="2025-07-15T23:13:17.524759833Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 23:13:17.525271 containerd[1995]: time="2025-07-15T23:13:17.524807749Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 23:13:17.525271 containerd[1995]: time="2025-07-15T23:13:17.524838601Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 23:13:17.525271 containerd[1995]: time="2025-07-15T23:13:17.524864257Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 
23:13:17.525271 containerd[1995]: time="2025-07-15T23:13:17.524896417Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 23:13:17.525271 containerd[1995]: time="2025-07-15T23:13:17.525150241Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 23:13:17.525271 containerd[1995]: time="2025-07-15T23:13:17.525196909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.526328917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.526418305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.526449805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.526486561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.526521001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.526551241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.526584565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.526616953Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.526649737Z" 
level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.527071309Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 23:13:17.527680 containerd[1995]: time="2025-07-15T23:13:17.527130625Z" level=info msg="Start snapshots syncer" Jul 15 23:13:17.530286 containerd[1995]: time="2025-07-15T23:13:17.529496869Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 23:13:17.530286 containerd[1995]: time="2025-07-15T23:13:17.529930717Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\"
:true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 23:13:17.530646 containerd[1995]: time="2025-07-15T23:13:17.530072569Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 23:13:17.532271 containerd[1995]: time="2025-07-15T23:13:17.531565477Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 23:13:17.532654 containerd[1995]: time="2025-07-15T23:13:17.532604017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 23:13:17.533780 containerd[1995]: time="2025-07-15T23:13:17.533109157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 23:13:17.533780 containerd[1995]: time="2025-07-15T23:13:17.533170009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 23:13:17.533780 containerd[1995]: time="2025-07-15T23:13:17.533199937Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 23:13:17.533780 containerd[1995]: time="2025-07-15T23:13:17.533263321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 23:13:17.533780 containerd[1995]: time="2025-07-15T23:13:17.533301565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 23:13:17.533780 containerd[1995]: time="2025-07-15T23:13:17.533332153Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 23:13:17.533780 containerd[1995]: time="2025-07-15T23:13:17.533415961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 23:13:17.533780 containerd[1995]: time="2025-07-15T23:13:17.533465041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 23:13:17.533780 containerd[1995]: time="2025-07-15T23:13:17.533502841Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 23:13:17.534987 containerd[1995]: time="2025-07-15T23:13:17.534815017Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.535158169Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.535199005Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.535226689Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.536331397Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.536385913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.536421241Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 
23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.536602237Z" level=info msg="runtime interface created" Jul 15 23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.536625637Z" level=info msg="created NRI interface" Jul 15 23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.536650321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.536685337Z" level=info msg="Connect containerd service" Jul 15 23:13:17.536873 containerd[1995]: time="2025-07-15T23:13:17.536785693Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 23:13:17.542287 containerd[1995]: time="2025-07-15T23:13:17.540873457Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:13:17.562208 locksmithd[2033]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 23:13:17.578341 systemd-networkd[1867]: eth0: Gained IPv6LL Jul 15 23:13:17.596589 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 23:13:17.604556 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 23:13:17.612164 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 15 23:13:17.625280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:13:17.636367 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 23:13:17.880885 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 15 23:13:18.030927 polkitd[2133]: Started polkitd version 126 Jul 15 23:13:18.045776 amazon-ssm-agent[2169]: Initializing new seelog logger Jul 15 23:13:18.046375 amazon-ssm-agent[2169]: New Seelog Logger Creation Complete Jul 15 23:13:18.046375 amazon-ssm-agent[2169]: 2025/07/15 23:13:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:13:18.046375 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:13:18.055616 amazon-ssm-agent[2169]: 2025/07/15 23:13:18 processing appconfig overrides Jul 15 23:13:18.061038 amazon-ssm-agent[2169]: 2025/07/15 23:13:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:13:18.061038 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:13:18.061189 amazon-ssm-agent[2169]: 2025/07/15 23:13:18 processing appconfig overrides Jul 15 23:13:18.064832 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.0608 INFO Proxy environment variables: Jul 15 23:13:18.067444 amazon-ssm-agent[2169]: 2025/07/15 23:13:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:13:18.067444 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:13:18.067946 amazon-ssm-agent[2169]: 2025/07/15 23:13:18 processing appconfig overrides Jul 15 23:13:18.079949 polkitd[2133]: Loading rules from directory /etc/polkit-1/rules.d Jul 15 23:13:18.086730 amazon-ssm-agent[2169]: 2025/07/15 23:13:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:13:18.086730 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 15 23:13:18.086958 amazon-ssm-agent[2169]: 2025/07/15 23:13:18 processing appconfig overrides Jul 15 23:13:18.090830 polkitd[2133]: Loading rules from directory /run/polkit-1/rules.d Jul 15 23:13:18.090947 polkitd[2133]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 23:13:18.091638 polkitd[2133]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 15 23:13:18.091723 polkitd[2133]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 23:13:18.091817 polkitd[2133]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 15 23:13:18.101324 polkitd[2133]: Finished loading, compiling and executing 2 rules Jul 15 23:13:18.102669 systemd[1]: Started polkit.service - Authorization Manager. Jul 15 23:13:18.113659 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 15 23:13:18.114940 polkitd[2133]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 15 23:13:18.165336 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.0609 INFO no_proxy: Jul 15 23:13:18.189810 systemd-hostnamed[2017]: Hostname set to (transient) Jul 15 23:13:18.190175 systemd-resolved[1868]: System hostname changed to 'ip-172-31-19-30'. 
Jul 15 23:13:18.196702 containerd[1995]: time="2025-07-15T23:13:18.196293720Z" level=info msg="Start subscribing containerd event" Jul 15 23:13:18.196702 containerd[1995]: time="2025-07-15T23:13:18.196409592Z" level=info msg="Start recovering state" Jul 15 23:13:18.196702 containerd[1995]: time="2025-07-15T23:13:18.196562328Z" level=info msg="Start event monitor" Jul 15 23:13:18.196702 containerd[1995]: time="2025-07-15T23:13:18.196593792Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:13:18.196702 containerd[1995]: time="2025-07-15T23:13:18.196612152Z" level=info msg="Start streaming server" Jul 15 23:13:18.196702 containerd[1995]: time="2025-07-15T23:13:18.196634592Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:13:18.196702 containerd[1995]: time="2025-07-15T23:13:18.196657752Z" level=info msg="runtime interface starting up..." Jul 15 23:13:18.196702 containerd[1995]: time="2025-07-15T23:13:18.196673328Z" level=info msg="starting plugins..." Jul 15 23:13:18.196702 containerd[1995]: time="2025-07-15T23:13:18.196701960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:13:18.201302 containerd[1995]: time="2025-07-15T23:13:18.197182560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:13:18.201302 containerd[1995]: time="2025-07-15T23:13:18.199104156Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 23:13:18.202627 containerd[1995]: time="2025-07-15T23:13:18.202527936Z" level=info msg="containerd successfully booted in 0.749998s" Jul 15 23:13:18.202623 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 15 23:13:18.265338 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.0609 INFO https_proxy: Jul 15 23:13:18.365892 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.0609 INFO http_proxy: Jul 15 23:13:18.470348 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.0611 INFO Checking if agent identity type OnPrem can be assumed Jul 15 23:13:18.569644 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.0650 INFO Checking if agent identity type EC2 can be assumed Jul 15 23:13:18.668789 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.3709 INFO Agent will take identity from EC2 Jul 15 23:13:18.769261 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.3784 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 15 23:13:18.869090 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.3785 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 15 23:13:18.887735 tar[1985]: linux-arm64/LICENSE Jul 15 23:13:18.888363 tar[1985]: linux-arm64/README.md Jul 15 23:13:18.944391 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 23:13:18.971391 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.3785 INFO [amazon-ssm-agent] Starting Core Agent Jul 15 23:13:19.072541 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.3785 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jul 15 23:13:19.119914 sshd_keygen[2014]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 23:13:19.173513 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.3785 INFO [Registrar] Starting registrar module Jul 15 23:13:19.196395 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 23:13:19.208747 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 23:13:19.219471 systemd[1]: Started sshd@0-172.31.19.30:22-139.178.89.65:57226.service - OpenSSH per-connection server daemon (139.178.89.65:57226). 
Jul 15 23:13:19.273852 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.3835 INFO [EC2Identity] Checking disk for registration info Jul 15 23:13:19.279216 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 23:13:19.279827 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 23:13:19.291804 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 23:13:19.344291 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 23:13:19.354548 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 23:13:19.365139 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 23:13:19.369660 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 23:13:19.376389 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.3836 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 15 23:13:19.476626 amazon-ssm-agent[2169]: 2025-07-15 23:13:18.3836 INFO [EC2Identity] Generating registration keypair Jul 15 23:13:19.513335 sshd[2219]: Accepted publickey for core from 139.178.89.65 port 57226 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:13:19.521081 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:13:19.547707 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 23:13:19.553975 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 23:13:19.597682 systemd-logind[1974]: New session 1 of user core. Jul 15 23:13:19.625298 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 23:13:19.638970 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 23:13:19.669644 (systemd)[2230]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 23:13:19.678443 systemd-logind[1974]: New session c1 of user core. 
Jul 15 23:13:19.700489 amazon-ssm-agent[2169]: 2025-07-15 23:13:19.7003 INFO [EC2Identity] Checking write access before registering Jul 15 23:13:19.753318 amazon-ssm-agent[2169]: 2025/07/15 23:13:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:13:19.753318 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:13:19.753318 amazon-ssm-agent[2169]: 2025/07/15 23:13:19 processing appconfig overrides Jul 15 23:13:19.788585 amazon-ssm-agent[2169]: 2025-07-15 23:13:19.7016 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 15 23:13:19.788585 amazon-ssm-agent[2169]: 2025-07-15 23:13:19.7499 INFO [EC2Identity] EC2 registration was successful. Jul 15 23:13:19.788768 amazon-ssm-agent[2169]: 2025-07-15 23:13:19.7499 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 15 23:13:19.788768 amazon-ssm-agent[2169]: 2025-07-15 23:13:19.7505 INFO [CredentialRefresher] credentialRefresher has started Jul 15 23:13:19.788768 amazon-ssm-agent[2169]: 2025-07-15 23:13:19.7523 INFO [CredentialRefresher] Starting credentials refresher loop Jul 15 23:13:19.788768 amazon-ssm-agent[2169]: 2025-07-15 23:13:19.7881 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 15 23:13:19.788768 amazon-ssm-agent[2169]: 2025-07-15 23:13:19.7884 INFO [CredentialRefresher] Credentials ready Jul 15 23:13:19.802140 amazon-ssm-agent[2169]: 2025-07-15 23:13:19.7887 INFO [CredentialRefresher] Next credential rotation will be in 29.9999902349 minutes Jul 15 23:13:20.031830 systemd[2230]: Queued start job for default target default.target. Jul 15 23:13:20.040905 systemd[2230]: Created slice app.slice - User Application Slice. Jul 15 23:13:20.040985 systemd[2230]: Reached target paths.target - Paths. Jul 15 23:13:20.041098 systemd[2230]: Reached target timers.target - Timers. 
Jul 15 23:13:20.047434 systemd[2230]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 23:13:20.065521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:13:20.069582 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 23:13:20.087695 systemd[2230]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 23:13:20.087909 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:13:20.088750 systemd[2230]: Reached target sockets.target - Sockets. Jul 15 23:13:20.088888 systemd[2230]: Reached target basic.target - Basic System. Jul 15 23:13:20.089127 systemd[2230]: Reached target default.target - Main User Target. Jul 15 23:13:20.089210 systemd[2230]: Startup finished in 381ms. Jul 15 23:13:20.089496 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 23:13:20.096216 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 23:13:20.100315 systemd[1]: Startup finished in 3.777s (kernel) + 10.094s (initrd) + 10.378s (userspace) = 24.249s. Jul 15 23:13:20.272735 systemd[1]: Started sshd@1-172.31.19.30:22-139.178.89.65:56050.service - OpenSSH per-connection server daemon (139.178.89.65:56050). Jul 15 23:13:20.304946 ntpd[1969]: Listen normally on 7 eth0 [fe80::40b:4ff:fe30:7313%2]:123 Jul 15 23:13:20.306722 ntpd[1969]: 15 Jul 23:13:20 ntpd[1969]: Listen normally on 7 eth0 [fe80::40b:4ff:fe30:7313%2]:123 Jul 15 23:13:20.487299 sshd[2251]: Accepted publickey for core from 139.178.89.65 port 56050 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:13:20.489348 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:13:20.502803 systemd-logind[1974]: New session 2 of user core. Jul 15 23:13:20.512101 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 15 23:13:20.644314 sshd[2258]: Connection closed by 139.178.89.65 port 56050 Jul 15 23:13:20.645267 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Jul 15 23:13:20.656488 systemd[1]: sshd@1-172.31.19.30:22-139.178.89.65:56050.service: Deactivated successfully. Jul 15 23:13:20.662061 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 23:13:20.667494 systemd-logind[1974]: Session 2 logged out. Waiting for processes to exit. Jul 15 23:13:20.685784 systemd[1]: Started sshd@2-172.31.19.30:22-139.178.89.65:56066.service - OpenSSH per-connection server daemon (139.178.89.65:56066). Jul 15 23:13:20.753190 systemd-logind[1974]: Removed session 2. Jul 15 23:13:20.821616 amazon-ssm-agent[2169]: 2025-07-15 23:13:20.8200 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 15 23:13:20.885109 sshd[2264]: Accepted publickey for core from 139.178.89.65 port 56066 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:13:20.891057 sshd-session[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:13:20.907864 systemd-logind[1974]: New session 3 of user core. Jul 15 23:13:20.915566 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:13:20.923905 amazon-ssm-agent[2169]: 2025-07-15 23:13:20.8241 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2268) started Jul 15 23:13:21.023074 amazon-ssm-agent[2169]: 2025-07-15 23:13:20.8242 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 15 23:13:21.044304 sshd[2272]: Connection closed by 139.178.89.65 port 56066 Jul 15 23:13:21.043614 sshd-session[2264]: pam_unix(sshd:session): session closed for user core Jul 15 23:13:21.055087 systemd[1]: sshd@2-172.31.19.30:22-139.178.89.65:56066.service: Deactivated successfully. 
Jul 15 23:13:21.064881 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 23:13:21.071432 systemd-logind[1974]: Session 3 logged out. Waiting for processes to exit. Jul 15 23:13:21.093618 systemd[1]: Started sshd@3-172.31.19.30:22-139.178.89.65:56068.service - OpenSSH per-connection server daemon (139.178.89.65:56068). Jul 15 23:13:21.101517 systemd-logind[1974]: Removed session 3. Jul 15 23:13:21.307262 sshd[2280]: Accepted publickey for core from 139.178.89.65 port 56068 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:13:21.309379 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:13:21.321360 systemd-logind[1974]: New session 4 of user core. Jul 15 23:13:21.325543 kubelet[2242]: E0715 23:13:21.325458 2242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:13:21.331592 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 23:13:21.333433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:13:21.334014 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:13:21.335099 systemd[1]: kubelet.service: Consumed 1.589s CPU time, 257.2M memory peak. Jul 15 23:13:21.463859 sshd[2287]: Connection closed by 139.178.89.65 port 56068 Jul 15 23:13:21.464542 sshd-session[2280]: pam_unix(sshd:session): session closed for user core Jul 15 23:13:21.476572 systemd[1]: sshd@3-172.31.19.30:22-139.178.89.65:56068.service: Deactivated successfully. Jul 15 23:13:21.482565 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 23:13:21.484671 systemd-logind[1974]: Session 4 logged out. Waiting for processes to exit. 
Jul 15 23:13:21.503750 systemd[1]: Started sshd@4-172.31.19.30:22-139.178.89.65:56078.service - OpenSSH per-connection server daemon (139.178.89.65:56078). Jul 15 23:13:21.506952 systemd-logind[1974]: Removed session 4. Jul 15 23:13:21.705509 sshd[2294]: Accepted publickey for core from 139.178.89.65 port 56078 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:13:21.712686 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:13:21.729868 systemd-logind[1974]: New session 5 of user core. Jul 15 23:13:21.741569 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 23:13:21.863675 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 23:13:21.865031 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:13:21.882974 sudo[2297]: pam_unix(sudo:session): session closed for user root Jul 15 23:13:21.907279 sshd[2296]: Connection closed by 139.178.89.65 port 56078 Jul 15 23:13:21.908608 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Jul 15 23:13:21.917485 systemd[1]: sshd@4-172.31.19.30:22-139.178.89.65:56078.service: Deactivated successfully. Jul 15 23:13:21.922551 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:13:21.924487 systemd-logind[1974]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:13:21.928106 systemd-logind[1974]: Removed session 5. Jul 15 23:13:21.945138 systemd[1]: Started sshd@5-172.31.19.30:22-139.178.89.65:56092.service - OpenSSH per-connection server daemon (139.178.89.65:56092). Jul 15 23:13:22.144961 sshd[2303]: Accepted publickey for core from 139.178.89.65 port 56092 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:13:22.148470 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:13:22.157898 systemd-logind[1974]: New session 6 of user core. 
Jul 15 23:13:22.166582 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 23:13:22.272945 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 23:13:22.274287 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:13:22.284753 sudo[2307]: pam_unix(sudo:session): session closed for user root Jul 15 23:13:22.295800 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 23:13:22.296546 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:13:22.315601 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:13:22.401864 augenrules[2329]: No rules Jul 15 23:13:22.404989 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:13:22.407367 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:13:22.409124 sudo[2306]: pam_unix(sudo:session): session closed for user root Jul 15 23:13:22.432295 sshd[2305]: Connection closed by 139.178.89.65 port 56092 Jul 15 23:13:22.433212 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Jul 15 23:13:22.441702 systemd[1]: sshd@5-172.31.19.30:22-139.178.89.65:56092.service: Deactivated successfully. Jul 15 23:13:22.446620 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 23:13:22.451006 systemd-logind[1974]: Session 6 logged out. Waiting for processes to exit. Jul 15 23:13:22.454103 systemd-logind[1974]: Removed session 6. Jul 15 23:13:22.470495 systemd[1]: Started sshd@6-172.31.19.30:22-139.178.89.65:56096.service - OpenSSH per-connection server daemon (139.178.89.65:56096). 
Jul 15 23:13:22.686324 sshd[2338]: Accepted publickey for core from 139.178.89.65 port 56096 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:13:22.689202 sshd-session[2338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:13:22.698525 systemd-logind[1974]: New session 7 of user core. Jul 15 23:13:22.715601 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 23:13:22.823844 sudo[2341]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 23:13:22.824623 sudo[2341]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:13:22.882413 systemd-resolved[1868]: Clock change detected. Flushing caches. Jul 15 23:13:23.053774 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 23:13:23.078492 (dockerd)[2358]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 23:13:23.523091 dockerd[2358]: time="2025-07-15T23:13:23.522992831Z" level=info msg="Starting up" Jul 15 23:13:23.526850 dockerd[2358]: time="2025-07-15T23:13:23.526774487Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 23:13:23.678658 systemd[1]: var-lib-docker-metacopy\x2dcheck12498182-merged.mount: Deactivated successfully. Jul 15 23:13:23.688376 dockerd[2358]: time="2025-07-15T23:13:23.688310520Z" level=info msg="Loading containers: start." Jul 15 23:13:23.701934 kernel: Initializing XFRM netlink socket Jul 15 23:13:24.033365 (udev-worker)[2379]: Network interface NamePolicy= disabled on kernel command line. Jul 15 23:13:24.114340 systemd-networkd[1867]: docker0: Link UP Jul 15 23:13:24.120246 dockerd[2358]: time="2025-07-15T23:13:24.120123214Z" level=info msg="Loading containers: done." 
Jul 15 23:13:24.147540 dockerd[2358]: time="2025-07-15T23:13:24.146291926Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 23:13:24.147540 dockerd[2358]: time="2025-07-15T23:13:24.146425282Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 15 23:13:24.147825 dockerd[2358]: time="2025-07-15T23:13:24.147691966Z" level=info msg="Initializing buildkit" Jul 15 23:13:24.187322 dockerd[2358]: time="2025-07-15T23:13:24.187255294Z" level=info msg="Completed buildkit initialization" Jul 15 23:13:24.204768 dockerd[2358]: time="2025-07-15T23:13:24.204690106Z" level=info msg="Daemon has completed initialization" Jul 15 23:13:24.205138 dockerd[2358]: time="2025-07-15T23:13:24.204969286Z" level=info msg="API listen on /run/docker.sock" Jul 15 23:13:24.205278 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 23:13:25.353664 containerd[1995]: time="2025-07-15T23:13:25.352879728Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Jul 15 23:13:25.952502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644396014.mount: Deactivated successfully. 
Jul 15 23:13:27.383616 containerd[1995]: time="2025-07-15T23:13:27.382834490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:27.386242 containerd[1995]: time="2025-07-15T23:13:27.386188670Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651813"
Jul 15 23:13:27.388931 containerd[1995]: time="2025-07-15T23:13:27.388866866Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:27.395991 containerd[1995]: time="2025-07-15T23:13:27.395916002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:27.398662 containerd[1995]: time="2025-07-15T23:13:27.398543906Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 2.045591746s"
Jul 15 23:13:27.398662 containerd[1995]: time="2025-07-15T23:13:27.398657150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\""
Jul 15 23:13:27.402117 containerd[1995]: time="2025-07-15T23:13:27.401969450Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Jul 15 23:13:28.868614 containerd[1995]: time="2025-07-15T23:13:28.868254053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:28.870947 containerd[1995]: time="2025-07-15T23:13:28.870815849Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460283"
Jul 15 23:13:28.873708 containerd[1995]: time="2025-07-15T23:13:28.873625937Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:28.879871 containerd[1995]: time="2025-07-15T23:13:28.879776381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:28.883151 containerd[1995]: time="2025-07-15T23:13:28.883052201Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.480704871s"
Jul 15 23:13:28.883151 containerd[1995]: time="2025-07-15T23:13:28.883128209Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\""
Jul 15 23:13:28.884931 containerd[1995]: time="2025-07-15T23:13:28.884840777Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Jul 15 23:13:30.088933 containerd[1995]: time="2025-07-15T23:13:30.088847631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:30.090491 containerd[1995]: time="2025-07-15T23:13:30.090406035Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125089"
Jul 15 23:13:30.091916 containerd[1995]: time="2025-07-15T23:13:30.091828851Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:30.097792 containerd[1995]: time="2025-07-15T23:13:30.096586131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:30.099002 containerd[1995]: time="2025-07-15T23:13:30.098944791Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.214024058s"
Jul 15 23:13:30.099190 containerd[1995]: time="2025-07-15T23:13:30.099156927Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\""
Jul 15 23:13:30.100062 containerd[1995]: time="2025-07-15T23:13:30.099886971Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Jul 15 23:13:31.161194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:13:31.165888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:13:31.448265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952391188.mount: Deactivated successfully.
Jul 15 23:13:31.561856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:13:31.577328 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:13:31.723645 kubelet[2638]: E0715 23:13:31.722331 2638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:13:31.732250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:13:31.733287 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:13:31.734372 systemd[1]: kubelet.service: Consumed 352ms CPU time, 106.1M memory peak.
Jul 15 23:13:32.157780 containerd[1995]: time="2025-07-15T23:13:32.157303470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:32.160008 containerd[1995]: time="2025-07-15T23:13:32.159884586Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915993"
Jul 15 23:13:32.162735 containerd[1995]: time="2025-07-15T23:13:32.162621246Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:32.167470 containerd[1995]: time="2025-07-15T23:13:32.167344686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:32.168874 containerd[1995]: time="2025-07-15T23:13:32.168653646Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 2.068398251s"
Jul 15 23:13:32.168874 containerd[1995]: time="2025-07-15T23:13:32.168720450Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\""
Jul 15 23:13:32.170240 containerd[1995]: time="2025-07-15T23:13:32.170153214Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 23:13:32.742770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298799571.mount: Deactivated successfully.
Jul 15 23:13:34.036694 containerd[1995]: time="2025-07-15T23:13:34.036628867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:34.039219 containerd[1995]: time="2025-07-15T23:13:34.039128287Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jul 15 23:13:34.041427 containerd[1995]: time="2025-07-15T23:13:34.041362651Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:34.050329 containerd[1995]: time="2025-07-15T23:13:34.050226055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:34.053638 containerd[1995]: time="2025-07-15T23:13:34.053228947Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.882997793s"
Jul 15 23:13:34.053638 containerd[1995]: time="2025-07-15T23:13:34.053312503Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 15 23:13:34.055056 containerd[1995]: time="2025-07-15T23:13:34.054690355Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 23:13:34.558785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount871910157.mount: Deactivated successfully.
Jul 15 23:13:34.572917 containerd[1995]: time="2025-07-15T23:13:34.572817874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:13:34.574993 containerd[1995]: time="2025-07-15T23:13:34.574902730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jul 15 23:13:34.577831 containerd[1995]: time="2025-07-15T23:13:34.577714366Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:13:34.585611 containerd[1995]: time="2025-07-15T23:13:34.585492406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:13:34.589031 containerd[1995]: time="2025-07-15T23:13:34.587545474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 532.786443ms"
Jul 15 23:13:34.589031 containerd[1995]: time="2025-07-15T23:13:34.588961978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 15 23:13:34.590535 containerd[1995]: time="2025-07-15T23:13:34.590135350Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 15 23:13:35.173693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324845647.mount: Deactivated successfully.
Jul 15 23:13:37.496364 containerd[1995]: time="2025-07-15T23:13:37.496271016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:37.498525 containerd[1995]: time="2025-07-15T23:13:37.498438408Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
Jul 15 23:13:37.501133 containerd[1995]: time="2025-07-15T23:13:37.501028764Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:37.507354 containerd[1995]: time="2025-07-15T23:13:37.507225828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:13:37.509767 containerd[1995]: time="2025-07-15T23:13:37.509484972Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.91928685s"
Jul 15 23:13:37.509767 containerd[1995]: time="2025-07-15T23:13:37.509571924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 15 23:13:41.983780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 23:13:41.990912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:13:42.373862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:13:42.388148 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:13:42.481897 kubelet[2784]: E0715 23:13:42.481830 2784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:13:42.487979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:13:42.488625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:13:42.489955 systemd[1]: kubelet.service: Consumed 318ms CPU time, 107.4M memory peak.
Jul 15 23:13:44.958048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:13:44.958408 systemd[1]: kubelet.service: Consumed 318ms CPU time, 107.4M memory peak.
Jul 15 23:13:44.969918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:13:45.014808 systemd[1]: Reload requested from client PID 2798 ('systemctl') (unit session-7.scope)...
Jul 15 23:13:45.015009 systemd[1]: Reloading...
Jul 15 23:13:45.256614 zram_generator::config[2849]: No configuration found.
Jul 15 23:13:45.469493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:13:45.735539 systemd[1]: Reloading finished in 719 ms.
Jul 15 23:13:45.832298 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 23:13:45.832825 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 23:13:45.834695 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:13:45.834794 systemd[1]: kubelet.service: Consumed 226ms CPU time, 95M memory peak.
Jul 15 23:13:45.838957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:13:46.171433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:13:46.187103 (kubelet)[2907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 23:13:46.258775 kubelet[2907]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:13:46.258775 kubelet[2907]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 15 23:13:46.258775 kubelet[2907]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:13:46.259330 kubelet[2907]: I0715 23:13:46.258891 2907 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 23:13:46.783860 kubelet[2907]: I0715 23:13:46.783797 2907 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 23:13:46.783860 kubelet[2907]: I0715 23:13:46.783843 2907 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 23:13:46.784488 kubelet[2907]: I0715 23:13:46.784442 2907 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 23:13:46.858644 kubelet[2907]: E0715 23:13:46.858535 2907 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.30:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:13:46.860281 kubelet[2907]: I0715 23:13:46.859935 2907 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:13:46.873773 kubelet[2907]: I0715 23:13:46.873740 2907 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:13:46.881636 kubelet[2907]: I0715 23:13:46.881551 2907 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:13:46.883718 kubelet[2907]: I0715 23:13:46.883646 2907 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 23:13:46.884139 kubelet[2907]: I0715 23:13:46.884069 2907 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:13:46.884428 kubelet[2907]: I0715 23:13:46.884130 2907 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-30","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:13:46.884635 kubelet[2907]: I0715 23:13:46.884548 2907 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:13:46.884635 kubelet[2907]: I0715 23:13:46.884590 2907 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 23:13:46.885113 kubelet[2907]: I0715 23:13:46.885058 2907 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:13:46.891828 kubelet[2907]: I0715 23:13:46.891757 2907 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 23:13:46.891828 kubelet[2907]: I0715 23:13:46.891828 2907 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:13:46.892021 kubelet[2907]: I0715 23:13:46.891871 2907 kubelet.go:314] "Adding apiserver pod source"
Jul 15 23:13:46.892076 kubelet[2907]: I0715 23:13:46.892034 2907 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:13:46.905496 kubelet[2907]: W0715 23:13:46.903867 2907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-30&limit=500&resourceVersion=0": dial tcp 172.31.19.30:6443: connect: connection refused
Jul 15 23:13:46.905496 kubelet[2907]: E0715 23:13:46.903976 2907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-30&limit=500&resourceVersion=0\": dial tcp 172.31.19.30:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:13:46.905496 kubelet[2907]: I0715 23:13:46.904249 2907 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:13:46.905496 kubelet[2907]: I0715 23:13:46.905444 2907 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 23:13:46.905900 kubelet[2907]: W0715 23:13:46.905853 2907 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 23:13:46.908455 kubelet[2907]: I0715 23:13:46.908382 2907 server.go:1274] "Started kubelet"
Jul 15 23:13:46.914928 kubelet[2907]: W0715 23:13:46.914862 2907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.30:6443: connect: connection refused
Jul 15 23:13:46.915533 kubelet[2907]: E0715 23:13:46.915092 2907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.30:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:13:46.917630 kubelet[2907]: E0715 23:13:46.915189 2907 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.30:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.30:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-30.18528fb857ae42a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-30,UID:ip-172-31-19-30,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-30,},FirstTimestamp:2025-07-15 23:13:46.908336807 +0000 UTC m=+0.715670549,LastTimestamp:2025-07-15 23:13:46.908336807 +0000 UTC m=+0.715670549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-30,}"
Jul 15 23:13:46.919666 kubelet[2907]: I0715 23:13:46.919545 2907 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:13:46.922498 kubelet[2907]: I0715 23:13:46.922430 2907 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:13:46.924261 kubelet[2907]: I0715 23:13:46.924220 2907 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 23:13:46.928513 kubelet[2907]: I0715 23:13:46.928424 2907 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:13:46.929086 kubelet[2907]: I0715 23:13:46.929054 2907 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:13:46.934904 kubelet[2907]: I0715 23:13:46.934856 2907 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 23:13:46.935446 kubelet[2907]: E0715 23:13:46.935400 2907 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-30\" not found"
Jul 15 23:13:46.938926 kubelet[2907]: I0715 23:13:46.938883 2907 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:13:46.940596 kubelet[2907]: I0715 23:13:46.938969 2907 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 23:13:46.940596 kubelet[2907]: I0715 23:13:46.939061 2907 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:13:46.941348 kubelet[2907]: W0715 23:13:46.941260 2907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.30:6443: connect: connection refused
Jul 15 23:13:46.941661 kubelet[2907]: E0715 23:13:46.941589 2907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.30:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:13:46.942002 kubelet[2907]: E0715 23:13:46.941936 2907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-30?timeout=10s\": dial tcp 172.31.19.30:6443: connect: connection refused" interval="200ms"
Jul 15 23:13:46.944307 kubelet[2907]: I0715 23:13:46.944265 2907 factory.go:221] Registration of the systemd container factory successfully
Jul 15 23:13:46.944651 kubelet[2907]: I0715 23:13:46.944613 2907 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:13:46.948640 kubelet[2907]: I0715 23:13:46.948606 2907 factory.go:221] Registration of the containerd container factory successfully
Jul 15 23:13:46.961265 kubelet[2907]: E0715 23:13:46.961228 2907 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 23:13:46.976999 kubelet[2907]: I0715 23:13:46.976950 2907 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 23:13:46.976999 kubelet[2907]: I0715 23:13:46.976985 2907 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 23:13:46.977204 kubelet[2907]: I0715 23:13:46.977034 2907 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:13:46.977629 kubelet[2907]: I0715 23:13:46.977520 2907 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:13:46.982388 kubelet[2907]: I0715 23:13:46.982257 2907 policy_none.go:49] "None policy: Start"
Jul 15 23:13:46.983172 kubelet[2907]: I0715 23:13:46.983142 2907 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 23:13:46.983286 kubelet[2907]: I0715 23:13:46.983268 2907 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 23:13:46.983402 kubelet[2907]: I0715 23:13:46.983384 2907 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 23:13:46.983661 kubelet[2907]: E0715 23:13:46.983545 2907 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:13:46.986412 kubelet[2907]: W0715 23:13:46.986364 2907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.30:6443: connect: connection refused
Jul 15 23:13:46.987964 kubelet[2907]: E0715 23:13:46.987896 2907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.30:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:13:46.987964 kubelet[2907]: I0715 23:13:46.986733 2907 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 23:13:46.988169 kubelet[2907]: I0715 23:13:46.987977 2907 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 23:13:47.000941 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 15 23:13:47.024118 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 15 23:13:47.032439 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 15 23:13:47.035769 kubelet[2907]: E0715 23:13:47.035631 2907 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-30\" not found"
Jul 15 23:13:47.050659 kubelet[2907]: I0715 23:13:47.050503 2907 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 23:13:47.050907 kubelet[2907]: I0715 23:13:47.050852 2907 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 23:13:47.050976 kubelet[2907]: I0715 23:13:47.050890 2907 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 23:13:47.051682 kubelet[2907]: I0715 23:13:47.051646 2907 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 23:13:47.056862 kubelet[2907]: E0715 23:13:47.056759 2907 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-30\" not found"
Jul 15 23:13:47.106306 systemd[1]: Created slice kubepods-burstable-pod3a43dec3bb7f88e369e79d3b74b86cf1.slice - libcontainer container kubepods-burstable-pod3a43dec3bb7f88e369e79d3b74b86cf1.slice.
Jul 15 23:13:47.122834 systemd[1]: Created slice kubepods-burstable-pod86dfaed5a0c9f63dfb76b4a3793c3281.slice - libcontainer container kubepods-burstable-pod86dfaed5a0c9f63dfb76b4a3793c3281.slice.
Jul 15 23:13:47.131403 systemd[1]: Created slice kubepods-burstable-pod8ef7e995825777878b88cacba1a4eff4.slice - libcontainer container kubepods-burstable-pod8ef7e995825777878b88cacba1a4eff4.slice.
Jul 15 23:13:47.143042 kubelet[2907]: E0715 23:13:47.142989 2907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-30?timeout=10s\": dial tcp 172.31.19.30:6443: connect: connection refused" interval="400ms"
Jul 15 23:13:47.153621 kubelet[2907]: I0715 23:13:47.153522 2907 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-30"
Jul 15 23:13:47.154614 kubelet[2907]: E0715 23:13:47.154507 2907 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.30:6443/api/v1/nodes\": dial tcp 172.31.19.30:6443: connect: connection refused" node="ip-172-31-19-30"
Jul 15 23:13:47.240900 kubelet[2907]: I0715 23:13:47.240831 2907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86dfaed5a0c9f63dfb76b4a3793c3281-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-30\" (UID: \"86dfaed5a0c9f63dfb76b4a3793c3281\") " pod="kube-system/kube-controller-manager-ip-172-31-19-30"
Jul 15 23:13:47.240900 kubelet[2907]: I0715 23:13:47.240902 2907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ef7e995825777878b88cacba1a4eff4-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-30\" (UID: \"8ef7e995825777878b88cacba1a4eff4\") " pod="kube-system/kube-scheduler-ip-172-31-19-30"
Jul 15 23:13:47.241114 kubelet[2907]: I0715 23:13:47.240942 2907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a43dec3bb7f88e369e79d3b74b86cf1-ca-certs\") pod \"kube-apiserver-ip-172-31-19-30\" (UID: \"3a43dec3bb7f88e369e79d3b74b86cf1\") " pod="kube-system/kube-apiserver-ip-172-31-19-30"
Jul 15 23:13:47.241114 kubelet[2907]: I0715 23:13:47.240985 2907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a43dec3bb7f88e369e79d3b74b86cf1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-30\" (UID: \"3a43dec3bb7f88e369e79d3b74b86cf1\") " pod="kube-system/kube-apiserver-ip-172-31-19-30"
Jul 15 23:13:47.241114 kubelet[2907]: I0715 23:13:47.241040 2907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86dfaed5a0c9f63dfb76b4a3793c3281-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-30\" (UID: \"86dfaed5a0c9f63dfb76b4a3793c3281\") " pod="kube-system/kube-controller-manager-ip-172-31-19-30"
Jul 15 23:13:47.241114 kubelet[2907]: I0715 23:13:47.241082 2907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/86dfaed5a0c9f63dfb76b4a3793c3281-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-30\" (UID: \"86dfaed5a0c9f63dfb76b4a3793c3281\") " pod="kube-system/kube-controller-manager-ip-172-31-19-30"
Jul 15 23:13:47.241318 kubelet[2907]: I0715 23:13:47.241117 2907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86dfaed5a0c9f63dfb76b4a3793c3281-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-30\" (UID: \"86dfaed5a0c9f63dfb76b4a3793c3281\") " pod="kube-system/kube-controller-manager-ip-172-31-19-30"
Jul 15 23:13:47.241318 kubelet[2907]: I0715 23:13:47.241151 2907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86dfaed5a0c9f63dfb76b4a3793c3281-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-30\" (UID: \"86dfaed5a0c9f63dfb76b4a3793c3281\") " pod="kube-system/kube-controller-manager-ip-172-31-19-30"
Jul 15 23:13:47.241318 kubelet[2907]: I0715 23:13:47.241188 2907 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a43dec3bb7f88e369e79d3b74b86cf1-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-30\" (UID: \"3a43dec3bb7f88e369e79d3b74b86cf1\") " pod="kube-system/kube-apiserver-ip-172-31-19-30"
Jul 15 23:13:47.358864 kubelet[2907]: I0715 23:13:47.358539 2907 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-30"
Jul 15 23:13:47.359363 kubelet[2907]: E0715 23:13:47.359186 2907 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.30:6443/api/v1/nodes\": dial tcp 172.31.19.30:6443: connect: connection refused" node="ip-172-31-19-30"
Jul 15 23:13:47.420122 containerd[1995]: time="2025-07-15T23:13:47.419992545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-30,Uid:3a43dec3bb7f88e369e79d3b74b86cf1,Namespace:kube-system,Attempt:0,}"
Jul 15 23:13:47.429994 containerd[1995]: time="2025-07-15T23:13:47.429754017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-30,Uid:86dfaed5a0c9f63dfb76b4a3793c3281,Namespace:kube-system,Attempt:0,}"
Jul 15 23:13:47.438990 containerd[1995]: time="2025-07-15T23:13:47.438942969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-30,Uid:8ef7e995825777878b88cacba1a4eff4,Namespace:kube-system,Attempt:0,}"
Jul 15 23:13:47.511763 containerd[1995]: time="2025-07-15T23:13:47.511446754Z" level=info msg="connecting to shim d935c5b6f93062838f75f06c9aaedca2e05b5a018f2946bf6cab3cc3e6f66377" address="unix:///run/containerd/s/ec2ae781bffb2defa98b3fa94499e3beb4b639246067724028c5e44cbaff39fd" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:13:47.512408 containerd[1995]: time="2025-07-15T23:13:47.512089450Z" level=info msg="connecting to shim ded9d1f56d3c57183918f0b17cee1ae549dfa42788ba3c93642df2301352eadb" address="unix:///run/containerd/s/da9233b9067f41d88f94f9ce90a71b44e853901e47546ef86e9895b384e077fc" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:13:47.544406 kubelet[2907]: E0715 23:13:47.544297 2907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-30?timeout=10s\": dial tcp 172.31.19.30:6443: connect: connection refused" interval="800ms"
Jul 15 23:13:47.564617 containerd[1995]: time="2025-07-15T23:13:47.563935090Z" level=info msg="connecting to shim 49f6d6f48bf1982501d34e0d14e657f8c7064a7ce6e90ce33cf289c6b34cfd12" address="unix:///run/containerd/s/92c3a2ab358097b0776fcb71d511e27abc7ecf3ba259aff8836efad8848b89c4" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:13:47.620904 systemd[1]: Started cri-containerd-49f6d6f48bf1982501d34e0d14e657f8c7064a7ce6e90ce33cf289c6b34cfd12.scope - libcontainer container 49f6d6f48bf1982501d34e0d14e657f8c7064a7ce6e90ce33cf289c6b34cfd12.
Jul 15 23:13:47.625040 systemd[1]: Started cri-containerd-d935c5b6f93062838f75f06c9aaedca2e05b5a018f2946bf6cab3cc3e6f66377.scope - libcontainer container d935c5b6f93062838f75f06c9aaedca2e05b5a018f2946bf6cab3cc3e6f66377.
Jul 15 23:13:47.628644 systemd[1]: Started cri-containerd-ded9d1f56d3c57183918f0b17cee1ae549dfa42788ba3c93642df2301352eadb.scope - libcontainer container ded9d1f56d3c57183918f0b17cee1ae549dfa42788ba3c93642df2301352eadb.
Jul 15 23:13:47.782179 kubelet[2907]: I0715 23:13:47.782111 2907 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-30" Jul 15 23:13:47.786672 kubelet[2907]: E0715 23:13:47.786600 2907 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.30:6443/api/v1/nodes\": dial tcp 172.31.19.30:6443: connect: connection refused" node="ip-172-31-19-30" Jul 15 23:13:47.813101 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 15 23:13:47.845207 containerd[1995]: time="2025-07-15T23:13:47.845137547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-30,Uid:8ef7e995825777878b88cacba1a4eff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"49f6d6f48bf1982501d34e0d14e657f8c7064a7ce6e90ce33cf289c6b34cfd12\"" Jul 15 23:13:47.853611 containerd[1995]: time="2025-07-15T23:13:47.853353540Z" level=info msg="CreateContainer within sandbox \"49f6d6f48bf1982501d34e0d14e657f8c7064a7ce6e90ce33cf289c6b34cfd12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:13:47.868017 kubelet[2907]: W0715 23:13:47.867871 2907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-30&limit=500&resourceVersion=0": dial tcp 172.31.19.30:6443: connect: connection refused Jul 15 23:13:47.868017 kubelet[2907]: E0715 23:13:47.867971 2907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-30&limit=500&resourceVersion=0\": dial tcp 172.31.19.30:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:47.868202 containerd[1995]: time="2025-07-15T23:13:47.868151100Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-30,Uid:86dfaed5a0c9f63dfb76b4a3793c3281,Namespace:kube-system,Attempt:0,} returns sandbox id \"d935c5b6f93062838f75f06c9aaedca2e05b5a018f2946bf6cab3cc3e6f66377\"" Jul 15 23:13:47.875340 containerd[1995]: time="2025-07-15T23:13:47.874907208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-30,Uid:3a43dec3bb7f88e369e79d3b74b86cf1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ded9d1f56d3c57183918f0b17cee1ae549dfa42788ba3c93642df2301352eadb\"" Jul 15 23:13:47.878354 containerd[1995]: time="2025-07-15T23:13:47.877853232Z" level=info msg="CreateContainer within sandbox \"d935c5b6f93062838f75f06c9aaedca2e05b5a018f2946bf6cab3cc3e6f66377\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:13:47.883110 containerd[1995]: time="2025-07-15T23:13:47.883041264Z" level=info msg="CreateContainer within sandbox \"ded9d1f56d3c57183918f0b17cee1ae549dfa42788ba3c93642df2301352eadb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:13:47.888636 containerd[1995]: time="2025-07-15T23:13:47.888423468Z" level=info msg="Container c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:47.897218 containerd[1995]: time="2025-07-15T23:13:47.897036072Z" level=info msg="Container 6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:47.915260 containerd[1995]: time="2025-07-15T23:13:47.915179232Z" level=info msg="CreateContainer within sandbox \"d935c5b6f93062838f75f06c9aaedca2e05b5a018f2946bf6cab3cc3e6f66377\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9\"" Jul 15 23:13:47.916917 containerd[1995]: time="2025-07-15T23:13:47.916846992Z" level=info msg="StartContainer for 
\"6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9\"" Jul 15 23:13:47.918695 containerd[1995]: time="2025-07-15T23:13:47.918609732Z" level=info msg="Container 92a0299eb105d1ea3632de832dabcbb29b2becff221b8729caf64fcff0df75d5: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:47.919362 containerd[1995]: time="2025-07-15T23:13:47.919206396Z" level=info msg="connecting to shim 6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9" address="unix:///run/containerd/s/ec2ae781bffb2defa98b3fa94499e3beb4b639246067724028c5e44cbaff39fd" protocol=ttrpc version=3 Jul 15 23:13:47.927442 containerd[1995]: time="2025-07-15T23:13:47.927353652Z" level=info msg="CreateContainer within sandbox \"49f6d6f48bf1982501d34e0d14e657f8c7064a7ce6e90ce33cf289c6b34cfd12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a\"" Jul 15 23:13:47.929317 containerd[1995]: time="2025-07-15T23:13:47.929235792Z" level=info msg="StartContainer for \"c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a\"" Jul 15 23:13:47.932296 containerd[1995]: time="2025-07-15T23:13:47.932203044Z" level=info msg="connecting to shim c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a" address="unix:///run/containerd/s/92c3a2ab358097b0776fcb71d511e27abc7ecf3ba259aff8836efad8848b89c4" protocol=ttrpc version=3 Jul 15 23:13:47.942661 containerd[1995]: time="2025-07-15T23:13:47.942304752Z" level=info msg="CreateContainer within sandbox \"ded9d1f56d3c57183918f0b17cee1ae549dfa42788ba3c93642df2301352eadb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"92a0299eb105d1ea3632de832dabcbb29b2becff221b8729caf64fcff0df75d5\"" Jul 15 23:13:47.944312 containerd[1995]: time="2025-07-15T23:13:47.944242692Z" level=info msg="StartContainer for \"92a0299eb105d1ea3632de832dabcbb29b2becff221b8729caf64fcff0df75d5\"" Jul 15 23:13:47.951670 containerd[1995]: 
time="2025-07-15T23:13:47.951324180Z" level=info msg="connecting to shim 92a0299eb105d1ea3632de832dabcbb29b2becff221b8729caf64fcff0df75d5" address="unix:///run/containerd/s/da9233b9067f41d88f94f9ce90a71b44e853901e47546ef86e9895b384e077fc" protocol=ttrpc version=3 Jul 15 23:13:47.977875 systemd[1]: Started cri-containerd-c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a.scope - libcontainer container c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a. Jul 15 23:13:47.995892 systemd[1]: Started cri-containerd-6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9.scope - libcontainer container 6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9. Jul 15 23:13:48.011329 kubelet[2907]: W0715 23:13:48.011107 2907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.30:6443: connect: connection refused Jul 15 23:13:48.012053 kubelet[2907]: E0715 23:13:48.011743 2907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.30:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:48.030258 systemd[1]: Started cri-containerd-92a0299eb105d1ea3632de832dabcbb29b2becff221b8729caf64fcff0df75d5.scope - libcontainer container 92a0299eb105d1ea3632de832dabcbb29b2becff221b8729caf64fcff0df75d5. 
Jul 15 23:13:48.097519 kubelet[2907]: W0715 23:13:48.097407 2907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.30:6443: connect: connection refused Jul 15 23:13:48.098688 kubelet[2907]: E0715 23:13:48.098631 2907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.30:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:48.176677 containerd[1995]: time="2025-07-15T23:13:48.175796349Z" level=info msg="StartContainer for \"6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9\" returns successfully" Jul 15 23:13:48.201788 containerd[1995]: time="2025-07-15T23:13:48.201740181Z" level=info msg="StartContainer for \"c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a\" returns successfully" Jul 15 23:13:48.243619 containerd[1995]: time="2025-07-15T23:13:48.243533745Z" level=info msg="StartContainer for \"92a0299eb105d1ea3632de832dabcbb29b2becff221b8729caf64fcff0df75d5\" returns successfully" Jul 15 23:13:48.347285 kubelet[2907]: E0715 23:13:48.347190 2907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-30?timeout=10s\": dial tcp 172.31.19.30:6443: connect: connection refused" interval="1.6s" Jul 15 23:13:48.364149 kubelet[2907]: W0715 23:13:48.364042 2907 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.30:6443: connect: connection refused Jul 15 23:13:48.364677 kubelet[2907]: 
E0715 23:13:48.364157 2907 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.30:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:48.591382 kubelet[2907]: I0715 23:13:48.590778 2907 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-30" Jul 15 23:13:51.915791 kubelet[2907]: I0715 23:13:51.915734 2907 apiserver.go:52] "Watching apiserver" Jul 15 23:13:51.929859 kubelet[2907]: E0715 23:13:51.929755 2907 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-30\" not found" node="ip-172-31-19-30" Jul 15 23:13:51.940201 kubelet[2907]: I0715 23:13:51.940130 2907 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 23:13:52.057993 kubelet[2907]: I0715 23:13:52.057600 2907 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-30" Jul 15 23:13:52.057993 kubelet[2907]: E0715 23:13:52.057650 2907 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-19-30\": node \"ip-172-31-19-30\" not found" Jul 15 23:13:52.166002 kubelet[2907]: E0715 23:13:52.165868 2907 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-30\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-30" Jul 15 23:13:54.671166 systemd[1]: Reload requested from client PID 3180 ('systemctl') (unit session-7.scope)... Jul 15 23:13:54.671738 systemd[1]: Reloading... Jul 15 23:13:54.911601 zram_generator::config[3227]: No configuration found. 
Jul 15 23:13:55.174015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:13:55.480607 systemd[1]: Reloading finished in 808 ms. Jul 15 23:13:55.527524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:13:55.529829 kubelet[2907]: I0715 23:13:55.529520 2907 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:13:55.547757 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:13:55.548385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:13:55.548605 systemd[1]: kubelet.service: Consumed 1.476s CPU time, 128M memory peak. Jul 15 23:13:55.555089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:13:55.910717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:13:55.925679 (kubelet)[3284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:13:56.025640 kubelet[3284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:13:56.027593 kubelet[3284]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 23:13:56.027593 kubelet[3284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 23:13:56.027593 kubelet[3284]: I0715 23:13:56.026320 3284 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:13:56.040776 kubelet[3284]: I0715 23:13:56.040734 3284 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 23:13:56.040986 kubelet[3284]: I0715 23:13:56.040966 3284 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:13:56.041632 kubelet[3284]: I0715 23:13:56.041515 3284 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 23:13:56.044544 kubelet[3284]: I0715 23:13:56.044508 3284 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 23:13:56.049277 kubelet[3284]: I0715 23:13:56.049232 3284 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:13:56.064640 kubelet[3284]: I0715 23:13:56.064550 3284 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:13:56.073235 kubelet[3284]: I0715 23:13:56.073169 3284 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:13:56.073719 kubelet[3284]: I0715 23:13:56.073410 3284 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 23:13:56.073810 kubelet[3284]: I0715 23:13:56.073700 3284 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:13:56.074097 kubelet[3284]: I0715 23:13:56.073749 3284 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-30","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} Jul 15 23:13:56.074097 kubelet[3284]: I0715 23:13:56.074096 3284 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:13:56.074318 kubelet[3284]: I0715 23:13:56.074117 3284 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 23:13:56.074318 kubelet[3284]: I0715 23:13:56.074203 3284 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:13:56.076134 kubelet[3284]: I0715 23:13:56.074389 3284 kubelet.go:408] "Attempting to sync node with API server" Jul 15 23:13:56.076134 kubelet[3284]: I0715 23:13:56.074413 3284 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:13:56.076134 kubelet[3284]: I0715 23:13:56.074446 3284 kubelet.go:314] "Adding apiserver pod source" Jul 15 23:13:56.076134 kubelet[3284]: I0715 23:13:56.074473 3284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:13:56.078190 kubelet[3284]: I0715 23:13:56.078011 3284 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:13:56.089838 kubelet[3284]: I0715 23:13:56.089360 3284 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:13:56.092938 kubelet[3284]: I0715 23:13:56.092617 3284 server.go:1274] "Started kubelet" Jul 15 23:13:56.102608 kubelet[3284]: I0715 23:13:56.101338 3284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:13:56.114525 kubelet[3284]: I0715 23:13:56.114460 3284 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:13:56.155435 kubelet[3284]: I0715 23:13:56.155394 3284 server.go:449] "Adding debug handlers to kubelet server" Jul 15 23:13:56.159590 kubelet[3284]: I0715 23:13:56.122112 3284 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 23:13:56.165325 kubelet[3284]: I0715 23:13:56.115332 3284 dynamic_serving_content.go:135] "Starting 
controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:13:56.165639 kubelet[3284]: I0715 23:13:56.122132 3284 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 23:13:56.166310 kubelet[3284]: I0715 23:13:56.166288 3284 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:13:56.167198 kubelet[3284]: I0715 23:13:56.150468 3284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:13:56.173756 kubelet[3284]: I0715 23:13:56.173712 3284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 23:13:56.173980 kubelet[3284]: I0715 23:13:56.173955 3284 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 23:13:56.174099 kubelet[3284]: I0715 23:13:56.174079 3284 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 23:13:56.174269 kubelet[3284]: E0715 23:13:56.174229 3284 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:13:56.174516 kubelet[3284]: E0715 23:13:56.122342 3284 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-30\" not found" Jul 15 23:13:56.175011 kubelet[3284]: I0715 23:13:56.139679 3284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:13:56.176464 kubelet[3284]: I0715 23:13:56.175379 3284 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:13:56.209124 kubelet[3284]: E0715 23:13:56.209085 3284 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:13:56.218589 kubelet[3284]: I0715 23:13:56.218058 3284 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:13:56.218773 kubelet[3284]: I0715 23:13:56.218747 3284 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:13:56.219008 kubelet[3284]: I0715 23:13:56.218975 3284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:13:56.275122 kubelet[3284]: E0715 23:13:56.275016 3284 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:13:56.335547 kubelet[3284]: I0715 23:13:56.335497 3284 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 23:13:56.335547 kubelet[3284]: I0715 23:13:56.335533 3284 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 23:13:56.335755 kubelet[3284]: I0715 23:13:56.335591 3284 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:13:56.336198 kubelet[3284]: I0715 23:13:56.335846 3284 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:13:56.336198 kubelet[3284]: I0715 23:13:56.335879 3284 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:13:56.336198 kubelet[3284]: I0715 23:13:56.335915 3284 policy_none.go:49] "None policy: Start" Jul 15 23:13:56.337483 kubelet[3284]: I0715 23:13:56.337340 3284 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 23:13:56.337483 kubelet[3284]: I0715 23:13:56.337389 3284 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:13:56.337703 kubelet[3284]: I0715 23:13:56.337684 3284 state_mem.go:75] "Updated machine memory state" Jul 15 23:13:56.352047 kubelet[3284]: I0715 23:13:56.351988 3284 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:13:56.354813 kubelet[3284]: I0715 23:13:56.354783 3284 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:13:56.355687 kubelet[3284]: I0715 23:13:56.355473 3284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:13:56.357845 kubelet[3284]: I0715 23:13:56.357787 3284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:13:56.481333 kubelet[3284]: I0715 23:13:56.481104 3284 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-30" Jul 15 23:13:56.505922 kubelet[3284]: E0715 23:13:56.505523 3284 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-19-30\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-30" Jul 15 23:13:56.516872 kubelet[3284]: I0715 23:13:56.516754 3284 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-19-30" Jul 15 23:13:56.516872 kubelet[3284]: I0715 23:13:56.516876 3284 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-30" Jul 15 23:13:56.575945 kubelet[3284]: I0715 23:13:56.575831 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a43dec3bb7f88e369e79d3b74b86cf1-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-30\" (UID: \"3a43dec3bb7f88e369e79d3b74b86cf1\") " pod="kube-system/kube-apiserver-ip-172-31-19-30" Jul 15 23:13:56.575945 kubelet[3284]: I0715 23:13:56.575894 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86dfaed5a0c9f63dfb76b4a3793c3281-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-30\" (UID: \"86dfaed5a0c9f63dfb76b4a3793c3281\") " pod="kube-system/kube-controller-manager-ip-172-31-19-30" Jul 15 23:13:56.576175 
kubelet[3284]: I0715 23:13:56.576021 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86dfaed5a0c9f63dfb76b4a3793c3281-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-30\" (UID: \"86dfaed5a0c9f63dfb76b4a3793c3281\") " pod="kube-system/kube-controller-manager-ip-172-31-19-30" Jul 15 23:13:56.576175 kubelet[3284]: I0715 23:13:56.576121 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86dfaed5a0c9f63dfb76b4a3793c3281-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-30\" (UID: \"86dfaed5a0c9f63dfb76b4a3793c3281\") " pod="kube-system/kube-controller-manager-ip-172-31-19-30" Jul 15 23:13:56.576287 kubelet[3284]: I0715 23:13:56.576194 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a43dec3bb7f88e369e79d3b74b86cf1-ca-certs\") pod \"kube-apiserver-ip-172-31-19-30\" (UID: \"3a43dec3bb7f88e369e79d3b74b86cf1\") " pod="kube-system/kube-apiserver-ip-172-31-19-30" Jul 15 23:13:56.576341 kubelet[3284]: I0715 23:13:56.576283 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a43dec3bb7f88e369e79d3b74b86cf1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-30\" (UID: \"3a43dec3bb7f88e369e79d3b74b86cf1\") " pod="kube-system/kube-apiserver-ip-172-31-19-30" Jul 15 23:13:56.576403 kubelet[3284]: I0715 23:13:56.576376 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/86dfaed5a0c9f63dfb76b4a3793c3281-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-30\" (UID: \"86dfaed5a0c9f63dfb76b4a3793c3281\") " 
pod="kube-system/kube-controller-manager-ip-172-31-19-30" Jul 15 23:13:56.577525 kubelet[3284]: I0715 23:13:56.576465 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86dfaed5a0c9f63dfb76b4a3793c3281-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-30\" (UID: \"86dfaed5a0c9f63dfb76b4a3793c3281\") " pod="kube-system/kube-controller-manager-ip-172-31-19-30" Jul 15 23:13:56.577525 kubelet[3284]: I0715 23:13:56.576616 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ef7e995825777878b88cacba1a4eff4-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-30\" (UID: \"8ef7e995825777878b88cacba1a4eff4\") " pod="kube-system/kube-scheduler-ip-172-31-19-30" Jul 15 23:13:57.075148 kubelet[3284]: I0715 23:13:57.075082 3284 apiserver.go:52] "Watching apiserver" Jul 15 23:13:57.166551 kubelet[3284]: I0715 23:13:57.166471 3284 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 23:13:57.255850 kubelet[3284]: I0715 23:13:57.254797 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-30" podStartSLOduration=1.254775306 podStartE2EDuration="1.254775306s" podCreationTimestamp="2025-07-15 23:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:13:57.25407393 +0000 UTC m=+1.319795803" watchObservedRunningTime="2025-07-15 23:13:57.254775306 +0000 UTC m=+1.320497155" Jul 15 23:13:57.305507 kubelet[3284]: E0715 23:13:57.305444 3284 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-19-30\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-19-30" Jul 15 23:13:57.310575 
kubelet[3284]: I0715 23:13:57.310448 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-30" podStartSLOduration=1.310377235 podStartE2EDuration="1.310377235s" podCreationTimestamp="2025-07-15 23:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:13:57.308174166 +0000 UTC m=+1.373896039" watchObservedRunningTime="2025-07-15 23:13:57.310377235 +0000 UTC m=+1.376099072" Jul 15 23:13:57.310926 kubelet[3284]: I0715 23:13:57.310853 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-30" podStartSLOduration=3.310811743 podStartE2EDuration="3.310811743s" podCreationTimestamp="2025-07-15 23:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:13:57.272971794 +0000 UTC m=+1.338693643" watchObservedRunningTime="2025-07-15 23:13:57.310811743 +0000 UTC m=+1.376533580" Jul 15 23:13:59.500697 kubelet[3284]: I0715 23:13:59.500629 3284 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:13:59.503608 containerd[1995]: time="2025-07-15T23:13:59.503132385Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 23:13:59.504810 kubelet[3284]: I0715 23:13:59.504004 3284 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:14:00.401049 systemd[1]: Created slice kubepods-besteffort-pod327255bf_01f8_4837_a1b7_823a1ff20544.slice - libcontainer container kubepods-besteffort-pod327255bf_01f8_4837_a1b7_823a1ff20544.slice. 
Jul 15 23:14:00.403980 kubelet[3284]: I0715 23:14:00.403922 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs2f4\" (UniqueName: \"kubernetes.io/projected/327255bf-01f8-4837-a1b7-823a1ff20544-kube-api-access-xs2f4\") pod \"kube-proxy-5t2zx\" (UID: \"327255bf-01f8-4837-a1b7-823a1ff20544\") " pod="kube-system/kube-proxy-5t2zx" Jul 15 23:14:00.404110 kubelet[3284]: I0715 23:14:00.403991 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/327255bf-01f8-4837-a1b7-823a1ff20544-xtables-lock\") pod \"kube-proxy-5t2zx\" (UID: \"327255bf-01f8-4837-a1b7-823a1ff20544\") " pod="kube-system/kube-proxy-5t2zx" Jul 15 23:14:00.404110 kubelet[3284]: I0715 23:14:00.404034 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/327255bf-01f8-4837-a1b7-823a1ff20544-lib-modules\") pod \"kube-proxy-5t2zx\" (UID: \"327255bf-01f8-4837-a1b7-823a1ff20544\") " pod="kube-system/kube-proxy-5t2zx" Jul 15 23:14:00.404110 kubelet[3284]: I0715 23:14:00.404077 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/327255bf-01f8-4837-a1b7-823a1ff20544-kube-proxy\") pod \"kube-proxy-5t2zx\" (UID: \"327255bf-01f8-4837-a1b7-823a1ff20544\") " pod="kube-system/kube-proxy-5t2zx" Jul 15 23:14:00.548537 systemd[1]: Created slice kubepods-besteffort-pod3f441fe2_ab9b_4666_ab94_34052535b4f0.slice - libcontainer container kubepods-besteffort-pod3f441fe2_ab9b_4666_ab94_34052535b4f0.slice. 
Jul 15 23:14:00.608494 kubelet[3284]: I0715 23:14:00.608418 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f441fe2-ab9b-4666-ab94-34052535b4f0-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-9czms\" (UID: \"3f441fe2-ab9b-4666-ab94-34052535b4f0\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-9czms" Jul 15 23:14:00.609078 kubelet[3284]: I0715 23:14:00.608636 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6vvm\" (UniqueName: \"kubernetes.io/projected/3f441fe2-ab9b-4666-ab94-34052535b4f0-kube-api-access-m6vvm\") pod \"tigera-operator-5bf8dfcb4-9czms\" (UID: \"3f441fe2-ab9b-4666-ab94-34052535b4f0\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-9czms" Jul 15 23:14:00.721578 containerd[1995]: time="2025-07-15T23:14:00.720812987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5t2zx,Uid:327255bf-01f8-4837-a1b7-823a1ff20544,Namespace:kube-system,Attempt:0,}" Jul 15 23:14:00.773686 containerd[1995]: time="2025-07-15T23:14:00.773607804Z" level=info msg="connecting to shim 722f6d1315d5a26f3069c61e816079fab3bd01c65bc29327e8b8fc238bcd6226" address="unix:///run/containerd/s/5ee74d2dbea7f35bb921b9e82ca874563f77651b52800a3a111c6e9d9e4d5da4" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:00.828908 systemd[1]: Started cri-containerd-722f6d1315d5a26f3069c61e816079fab3bd01c65bc29327e8b8fc238bcd6226.scope - libcontainer container 722f6d1315d5a26f3069c61e816079fab3bd01c65bc29327e8b8fc238bcd6226. 
Jul 15 23:14:00.866244 containerd[1995]: time="2025-07-15T23:14:00.865884060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-9czms,Uid:3f441fe2-ab9b-4666-ab94-34052535b4f0,Namespace:tigera-operator,Attempt:0,}" Jul 15 23:14:00.897732 containerd[1995]: time="2025-07-15T23:14:00.897664728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5t2zx,Uid:327255bf-01f8-4837-a1b7-823a1ff20544,Namespace:kube-system,Attempt:0,} returns sandbox id \"722f6d1315d5a26f3069c61e816079fab3bd01c65bc29327e8b8fc238bcd6226\"" Jul 15 23:14:00.910894 containerd[1995]: time="2025-07-15T23:14:00.910825584Z" level=info msg="CreateContainer within sandbox \"722f6d1315d5a26f3069c61e816079fab3bd01c65bc29327e8b8fc238bcd6226\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:14:00.929480 containerd[1995]: time="2025-07-15T23:14:00.928676448Z" level=info msg="connecting to shim fdc75af76b7e9a1b9afd29cea568e6f8ad0cf79c9198d41d2043dada3a702a22" address="unix:///run/containerd/s/a9f790c3ef2eca6647a71deec11d72dfe2c02861916cd0e0be28fa80d7d2ea86" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:00.941986 containerd[1995]: time="2025-07-15T23:14:00.941911753Z" level=info msg="Container 5ce8a5d400115840bcaac5216e82dab7a35a4f0f8f0f0b009eb8c008d9ddbf44: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:00.964212 containerd[1995]: time="2025-07-15T23:14:00.963746233Z" level=info msg="CreateContainer within sandbox \"722f6d1315d5a26f3069c61e816079fab3bd01c65bc29327e8b8fc238bcd6226\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5ce8a5d400115840bcaac5216e82dab7a35a4f0f8f0f0b009eb8c008d9ddbf44\"" Jul 15 23:14:00.966614 containerd[1995]: time="2025-07-15T23:14:00.966310105Z" level=info msg="StartContainer for \"5ce8a5d400115840bcaac5216e82dab7a35a4f0f8f0f0b009eb8c008d9ddbf44\"" Jul 15 23:14:00.978721 containerd[1995]: time="2025-07-15T23:14:00.977957077Z" level=info msg="connecting to shim 
5ce8a5d400115840bcaac5216e82dab7a35a4f0f8f0f0b009eb8c008d9ddbf44" address="unix:///run/containerd/s/5ee74d2dbea7f35bb921b9e82ca874563f77651b52800a3a111c6e9d9e4d5da4" protocol=ttrpc version=3 Jul 15 23:14:00.982884 systemd[1]: Started cri-containerd-fdc75af76b7e9a1b9afd29cea568e6f8ad0cf79c9198d41d2043dada3a702a22.scope - libcontainer container fdc75af76b7e9a1b9afd29cea568e6f8ad0cf79c9198d41d2043dada3a702a22. Jul 15 23:14:01.039874 systemd[1]: Started cri-containerd-5ce8a5d400115840bcaac5216e82dab7a35a4f0f8f0f0b009eb8c008d9ddbf44.scope - libcontainer container 5ce8a5d400115840bcaac5216e82dab7a35a4f0f8f0f0b009eb8c008d9ddbf44. Jul 15 23:14:01.090301 containerd[1995]: time="2025-07-15T23:14:01.090227721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-9czms,Uid:3f441fe2-ab9b-4666-ab94-34052535b4f0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fdc75af76b7e9a1b9afd29cea568e6f8ad0cf79c9198d41d2043dada3a702a22\"" Jul 15 23:14:01.102359 containerd[1995]: time="2025-07-15T23:14:01.100182705Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 23:14:01.146637 containerd[1995]: time="2025-07-15T23:14:01.146374570Z" level=info msg="StartContainer for \"5ce8a5d400115840bcaac5216e82dab7a35a4f0f8f0f0b009eb8c008d9ddbf44\" returns successfully" Jul 15 23:14:01.427883 update_engine[1975]: I20250715 23:14:01.427789 1975 update_attempter.cc:509] Updating boot flags... Jul 15 23:14:02.475715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1507520696.mount: Deactivated successfully. 
Jul 15 23:14:03.372445 containerd[1995]: time="2025-07-15T23:14:03.372390325Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:03.374483 containerd[1995]: time="2025-07-15T23:14:03.374434069Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 15 23:14:03.376809 containerd[1995]: time="2025-07-15T23:14:03.376755853Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:03.382762 containerd[1995]: time="2025-07-15T23:14:03.382653937Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:03.384227 containerd[1995]: time="2025-07-15T23:14:03.384182893Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.283931272s" Jul 15 23:14:03.384388 containerd[1995]: time="2025-07-15T23:14:03.384359917Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 15 23:14:03.393284 containerd[1995]: time="2025-07-15T23:14:03.393220429Z" level=info msg="CreateContainer within sandbox \"fdc75af76b7e9a1b9afd29cea568e6f8ad0cf79c9198d41d2043dada3a702a22\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 23:14:03.421970 containerd[1995]: time="2025-07-15T23:14:03.421911013Z" level=info msg="Container 
8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:03.434367 kubelet[3284]: I0715 23:14:03.434270 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5t2zx" podStartSLOduration=3.434245309 podStartE2EDuration="3.434245309s" podCreationTimestamp="2025-07-15 23:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:14:01.332090566 +0000 UTC m=+5.397812451" watchObservedRunningTime="2025-07-15 23:14:03.434245309 +0000 UTC m=+7.499967182" Jul 15 23:14:03.443895 containerd[1995]: time="2025-07-15T23:14:03.443743561Z" level=info msg="CreateContainer within sandbox \"fdc75af76b7e9a1b9afd29cea568e6f8ad0cf79c9198d41d2043dada3a702a22\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52\"" Jul 15 23:14:03.445813 containerd[1995]: time="2025-07-15T23:14:03.445735429Z" level=info msg="StartContainer for \"8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52\"" Jul 15 23:14:03.448026 containerd[1995]: time="2025-07-15T23:14:03.447932341Z" level=info msg="connecting to shim 8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52" address="unix:///run/containerd/s/a9f790c3ef2eca6647a71deec11d72dfe2c02861916cd0e0be28fa80d7d2ea86" protocol=ttrpc version=3 Jul 15 23:14:03.487083 systemd[1]: Started cri-containerd-8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52.scope - libcontainer container 8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52. 
Jul 15 23:14:03.544602 containerd[1995]: time="2025-07-15T23:14:03.544488865Z" level=info msg="StartContainer for \"8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52\" returns successfully" Jul 15 23:14:07.073584 kubelet[3284]: I0715 23:14:07.073341 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-9czms" podStartSLOduration=4.782895147 podStartE2EDuration="7.073319703s" podCreationTimestamp="2025-07-15 23:14:00 +0000 UTC" firstStartedPulling="2025-07-15 23:14:01.095426745 +0000 UTC m=+5.161148582" lastFinishedPulling="2025-07-15 23:14:03.385851301 +0000 UTC m=+7.451573138" observedRunningTime="2025-07-15 23:14:04.35971145 +0000 UTC m=+8.425433323" watchObservedRunningTime="2025-07-15 23:14:07.073319703 +0000 UTC m=+11.139041540" Jul 15 23:14:12.676745 sudo[2341]: pam_unix(sudo:session): session closed for user root Jul 15 23:14:12.705301 sshd[2340]: Connection closed by 139.178.89.65 port 56096 Jul 15 23:14:12.706141 sshd-session[2338]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:12.717896 systemd[1]: sshd@6-172.31.19.30:22-139.178.89.65:56096.service: Deactivated successfully. Jul 15 23:14:12.723992 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:14:12.724582 systemd[1]: session-7.scope: Consumed 11.067s CPU time, 226.7M memory peak. Jul 15 23:14:12.731195 systemd-logind[1974]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:14:12.734388 systemd-logind[1974]: Removed session 7. Jul 15 23:14:23.812446 systemd[1]: Created slice kubepods-besteffort-pod577fa492_fa89_4bd7_ad45_9e4e64e285bf.slice - libcontainer container kubepods-besteffort-pod577fa492_fa89_4bd7_ad45_9e4e64e285bf.slice. 
Jul 15 23:14:23.865706 kubelet[3284]: I0715 23:14:23.864211 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/577fa492-fa89-4bd7-ad45-9e4e64e285bf-typha-certs\") pod \"calico-typha-7466995f9f-7cgsm\" (UID: \"577fa492-fa89-4bd7-ad45-9e4e64e285bf\") " pod="calico-system/calico-typha-7466995f9f-7cgsm" Jul 15 23:14:23.865706 kubelet[3284]: I0715 23:14:23.864287 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/577fa492-fa89-4bd7-ad45-9e4e64e285bf-tigera-ca-bundle\") pod \"calico-typha-7466995f9f-7cgsm\" (UID: \"577fa492-fa89-4bd7-ad45-9e4e64e285bf\") " pod="calico-system/calico-typha-7466995f9f-7cgsm" Jul 15 23:14:23.865706 kubelet[3284]: I0715 23:14:23.864333 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj8cg\" (UniqueName: \"kubernetes.io/projected/577fa492-fa89-4bd7-ad45-9e4e64e285bf-kube-api-access-zj8cg\") pod \"calico-typha-7466995f9f-7cgsm\" (UID: \"577fa492-fa89-4bd7-ad45-9e4e64e285bf\") " pod="calico-system/calico-typha-7466995f9f-7cgsm" Jul 15 23:14:24.084210 systemd[1]: Created slice kubepods-besteffort-pod6b780e9b_fe9e_4e28_ad67_b0d1943c80e2.slice - libcontainer container kubepods-besteffort-pod6b780e9b_fe9e_4e28_ad67_b0d1943c80e2.slice. 
Jul 15 23:14:24.133401 containerd[1995]: time="2025-07-15T23:14:24.133346060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7466995f9f-7cgsm,Uid:577fa492-fa89-4bd7-ad45-9e4e64e285bf,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:24.171603 kubelet[3284]: I0715 23:14:24.166832 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-tigera-ca-bundle\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.171603 kubelet[3284]: I0715 23:14:24.166897 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-cni-bin-dir\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.171603 kubelet[3284]: I0715 23:14:24.166933 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-cni-log-dir\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.171603 kubelet[3284]: I0715 23:14:24.166968 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-policysync\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.171603 kubelet[3284]: I0715 23:14:24.167002 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-var-run-calico\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.171967 kubelet[3284]: I0715 23:14:24.167034 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-xtables-lock\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.171967 kubelet[3284]: I0715 23:14:24.167075 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-lib-modules\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.171967 kubelet[3284]: I0715 23:14:24.167143 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-var-lib-calico\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.171967 kubelet[3284]: I0715 23:14:24.167180 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz8dc\" (UniqueName: \"kubernetes.io/projected/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-kube-api-access-tz8dc\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.171967 kubelet[3284]: I0715 23:14:24.167226 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-flexvol-driver-host\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.172210 kubelet[3284]: I0715 23:14:24.167264 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-node-certs\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.172210 kubelet[3284]: I0715 23:14:24.167303 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6b780e9b-fe9e-4e28-ad67-b0d1943c80e2-cni-net-dir\") pod \"calico-node-dmfmg\" (UID: \"6b780e9b-fe9e-4e28-ad67-b0d1943c80e2\") " pod="calico-system/calico-node-dmfmg" Jul 15 23:14:24.205726 containerd[1995]: time="2025-07-15T23:14:24.205664864Z" level=info msg="connecting to shim 32412ee7035892b32b67ab81b0fcc5e6b55b563576f951f9baea8e5ad16e5cd6" address="unix:///run/containerd/s/6508402977e2d18eb25e414355ee8fa7fd1ae0e94056ad05c75c526be7da785e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:24.288500 kubelet[3284]: E0715 23:14:24.288461 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.288834 kubelet[3284]: W0715 23:14:24.288666 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.289295 kubelet[3284]: E0715 23:14:24.289215 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.291218 systemd[1]: Started cri-containerd-32412ee7035892b32b67ab81b0fcc5e6b55b563576f951f9baea8e5ad16e5cd6.scope - libcontainer container 32412ee7035892b32b67ab81b0fcc5e6b55b563576f951f9baea8e5ad16e5cd6. Jul 15 23:14:24.303482 kubelet[3284]: E0715 23:14:24.302509 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.306141 kubelet[3284]: W0715 23:14:24.306074 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.306293 kubelet[3284]: E0715 23:14:24.306147 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.327937 kubelet[3284]: E0715 23:14:24.327856 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.327937 kubelet[3284]: W0715 23:14:24.327896 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.328276 kubelet[3284]: E0715 23:14:24.328182 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.399227 containerd[1995]: time="2025-07-15T23:14:24.399058161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dmfmg,Uid:6b780e9b-fe9e-4e28-ad67-b0d1943c80e2,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:24.459869 containerd[1995]: time="2025-07-15T23:14:24.459308517Z" level=info msg="connecting to shim 1b65e81b8bad1077cb4158c47ff8316ed17615376f5650a1eec4f54965f42356" address="unix:///run/containerd/s/859fb013858bb8382ad9c0b6020b93cdbd969d550a5b5807064bc0711942ffc0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:24.546926 systemd[1]: Started cri-containerd-1b65e81b8bad1077cb4158c47ff8316ed17615376f5650a1eec4f54965f42356.scope - libcontainer container 1b65e81b8bad1077cb4158c47ff8316ed17615376f5650a1eec4f54965f42356. Jul 15 23:14:24.561201 kubelet[3284]: E0715 23:14:24.561072 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j48z9" podUID="fe57ee22-2543-410d-ab8c-77d8463dc034" Jul 15 23:14:24.660249 kubelet[3284]: E0715 23:14:24.659754 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.660249 kubelet[3284]: W0715 23:14:24.659793 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.660249 kubelet[3284]: E0715 23:14:24.659827 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.660901 kubelet[3284]: E0715 23:14:24.660693 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.660901 kubelet[3284]: W0715 23:14:24.660721 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.660901 kubelet[3284]: E0715 23:14:24.660750 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.661596 kubelet[3284]: E0715 23:14:24.661530 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.661596 kubelet[3284]: W0715 23:14:24.661584 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.661800 kubelet[3284]: E0715 23:14:24.661620 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.662802 kubelet[3284]: E0715 23:14:24.662752 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.662802 kubelet[3284]: W0715 23:14:24.662792 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.663243 kubelet[3284]: E0715 23:14:24.662827 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.663895 kubelet[3284]: E0715 23:14:24.663844 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.663895 kubelet[3284]: W0715 23:14:24.663884 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.664159 kubelet[3284]: E0715 23:14:24.663916 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.665226 kubelet[3284]: E0715 23:14:24.665178 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.665226 kubelet[3284]: W0715 23:14:24.665221 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.665732 kubelet[3284]: E0715 23:14:24.665254 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.666077 kubelet[3284]: E0715 23:14:24.666037 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.666077 kubelet[3284]: W0715 23:14:24.666071 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.666383 kubelet[3284]: E0715 23:14:24.666101 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.667162 kubelet[3284]: E0715 23:14:24.667106 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.667162 kubelet[3284]: W0715 23:14:24.667145 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.668702 kubelet[3284]: E0715 23:14:24.668622 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.669113 kubelet[3284]: E0715 23:14:24.669076 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.669113 kubelet[3284]: W0715 23:14:24.669107 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.669365 kubelet[3284]: E0715 23:14:24.669135 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.669635 kubelet[3284]: E0715 23:14:24.669587 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.669635 kubelet[3284]: W0715 23:14:24.669618 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.669800 kubelet[3284]: E0715 23:14:24.669647 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.669955 kubelet[3284]: E0715 23:14:24.669912 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.669955 kubelet[3284]: W0715 23:14:24.669939 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.670220 kubelet[3284]: E0715 23:14:24.669961 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.670527 kubelet[3284]: E0715 23:14:24.670489 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.670527 kubelet[3284]: W0715 23:14:24.670521 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.670705 kubelet[3284]: E0715 23:14:24.670549 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.672106 kubelet[3284]: E0715 23:14:24.672058 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.672106 kubelet[3284]: W0715 23:14:24.672093 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.672331 kubelet[3284]: E0715 23:14:24.672125 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.672908 kubelet[3284]: E0715 23:14:24.672863 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.672908 kubelet[3284]: W0715 23:14:24.672899 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.673232 kubelet[3284]: E0715 23:14:24.672933 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.673488 kubelet[3284]: E0715 23:14:24.673450 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.673488 kubelet[3284]: W0715 23:14:24.673483 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.673708 kubelet[3284]: E0715 23:14:24.673512 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.674323 kubelet[3284]: E0715 23:14:24.674272 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.674323 kubelet[3284]: W0715 23:14:24.674312 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.674666 kubelet[3284]: E0715 23:14:24.674345 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.675970 kubelet[3284]: E0715 23:14:24.675919 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.675970 kubelet[3284]: W0715 23:14:24.675962 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.676661 kubelet[3284]: E0715 23:14:24.675996 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.677544 kubelet[3284]: E0715 23:14:24.677495 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.677544 kubelet[3284]: W0715 23:14:24.677533 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.678116 kubelet[3284]: E0715 23:14:24.677639 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.679723 kubelet[3284]: E0715 23:14:24.679063 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.679723 kubelet[3284]: W0715 23:14:24.679099 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.679723 kubelet[3284]: E0715 23:14:24.679132 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.680794 kubelet[3284]: E0715 23:14:24.680726 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.681300 kubelet[3284]: W0715 23:14:24.681060 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.681300 kubelet[3284]: E0715 23:14:24.681103 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.684373 kubelet[3284]: E0715 23:14:24.683148 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.684373 kubelet[3284]: W0715 23:14:24.683184 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.684373 kubelet[3284]: E0715 23:14:24.683216 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.684373 kubelet[3284]: I0715 23:14:24.683279 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe57ee22-2543-410d-ab8c-77d8463dc034-registration-dir\") pod \"csi-node-driver-j48z9\" (UID: \"fe57ee22-2543-410d-ab8c-77d8463dc034\") " pod="calico-system/csi-node-driver-j48z9" Jul 15 23:14:24.684373 kubelet[3284]: E0715 23:14:24.683676 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.684373 kubelet[3284]: W0715 23:14:24.683701 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.684373 kubelet[3284]: E0715 23:14:24.683727 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.684373 kubelet[3284]: I0715 23:14:24.683763 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe57ee22-2543-410d-ab8c-77d8463dc034-socket-dir\") pod \"csi-node-driver-j48z9\" (UID: \"fe57ee22-2543-410d-ab8c-77d8463dc034\") " pod="calico-system/csi-node-driver-j48z9" Jul 15 23:14:24.684373 kubelet[3284]: E0715 23:14:24.684045 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.684982 kubelet[3284]: W0715 23:14:24.684064 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.684982 kubelet[3284]: E0715 23:14:24.684084 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.684982 kubelet[3284]: I0715 23:14:24.684116 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw7sd\" (UniqueName: \"kubernetes.io/projected/fe57ee22-2543-410d-ab8c-77d8463dc034-kube-api-access-jw7sd\") pod \"csi-node-driver-j48z9\" (UID: \"fe57ee22-2543-410d-ab8c-77d8463dc034\") " pod="calico-system/csi-node-driver-j48z9" Jul 15 23:14:24.684982 kubelet[3284]: E0715 23:14:24.684412 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.684982 kubelet[3284]: W0715 23:14:24.684431 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.684982 kubelet[3284]: E0715 23:14:24.684455 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.684982 kubelet[3284]: I0715 23:14:24.684489 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe57ee22-2543-410d-ab8c-77d8463dc034-kubelet-dir\") pod \"csi-node-driver-j48z9\" (UID: \"fe57ee22-2543-410d-ab8c-77d8463dc034\") " pod="calico-system/csi-node-driver-j48z9" Jul 15 23:14:24.685778 kubelet[3284]: E0715 23:14:24.685726 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.685778 kubelet[3284]: W0715 23:14:24.685766 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.686135 kubelet[3284]: E0715 23:14:24.685815 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.686135 kubelet[3284]: I0715 23:14:24.685860 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fe57ee22-2543-410d-ab8c-77d8463dc034-varrun\") pod \"csi-node-driver-j48z9\" (UID: \"fe57ee22-2543-410d-ab8c-77d8463dc034\") " pod="calico-system/csi-node-driver-j48z9" Jul 15 23:14:24.689806 kubelet[3284]: E0715 23:14:24.689291 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.690285 kubelet[3284]: W0715 23:14:24.690059 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.690285 kubelet[3284]: E0715 23:14:24.690150 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.692958 kubelet[3284]: E0715 23:14:24.692755 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.692958 kubelet[3284]: W0715 23:14:24.692818 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.693173 kubelet[3284]: E0715 23:14:24.692980 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.694675 kubelet[3284]: E0715 23:14:24.694639 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.695887 kubelet[3284]: W0715 23:14:24.695699 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.695887 kubelet[3284]: E0715 23:14:24.695786 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.696752 kubelet[3284]: E0715 23:14:24.696718 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.697031 kubelet[3284]: W0715 23:14:24.696910 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.697031 kubelet[3284]: E0715 23:14:24.696996 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.698072 kubelet[3284]: E0715 23:14:24.698037 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.699353 kubelet[3284]: W0715 23:14:24.698288 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.699353 kubelet[3284]: E0715 23:14:24.698366 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.699771 kubelet[3284]: E0715 23:14:24.699742 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.699893 kubelet[3284]: W0715 23:14:24.699867 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.700054 kubelet[3284]: E0715 23:14:24.700014 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.700587 kubelet[3284]: E0715 23:14:24.700514 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.701699 kubelet[3284]: W0715 23:14:24.701647 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.702068 kubelet[3284]: E0715 23:14:24.701909 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.702969 kubelet[3284]: E0715 23:14:24.702754 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.702969 kubelet[3284]: W0715 23:14:24.702804 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.702969 kubelet[3284]: E0715 23:14:24.702835 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.704754 kubelet[3284]: E0715 23:14:24.704716 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.705606 kubelet[3284]: W0715 23:14:24.705055 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.705606 kubelet[3284]: E0715 23:14:24.705100 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.708767 kubelet[3284]: E0715 23:14:24.708725 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.708998 kubelet[3284]: W0715 23:14:24.708913 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.708998 kubelet[3284]: E0715 23:14:24.708951 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.786757 kubelet[3284]: E0715 23:14:24.786671 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.786757 kubelet[3284]: W0715 23:14:24.786705 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.787192 kubelet[3284]: E0715 23:14:24.787004 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.787749 kubelet[3284]: E0715 23:14:24.787724 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.788135 kubelet[3284]: W0715 23:14:24.787922 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.788135 kubelet[3284]: E0715 23:14:24.787970 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.788813 kubelet[3284]: E0715 23:14:24.788741 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.789779 kubelet[3284]: W0715 23:14:24.789642 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.789779 kubelet[3284]: E0715 23:14:24.789767 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.790797 kubelet[3284]: E0715 23:14:24.790543 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.790797 kubelet[3284]: W0715 23:14:24.790595 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.790797 kubelet[3284]: E0715 23:14:24.790676 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.791262 kubelet[3284]: E0715 23:14:24.791204 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.791578 kubelet[3284]: W0715 23:14:24.791233 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.791670 kubelet[3284]: E0715 23:14:24.791593 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.793436 kubelet[3284]: E0715 23:14:24.793362 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.793436 kubelet[3284]: W0715 23:14:24.793398 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.793778 kubelet[3284]: E0715 23:14:24.793702 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.794327 kubelet[3284]: E0715 23:14:24.794265 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.794629 kubelet[3284]: W0715 23:14:24.794297 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.794629 kubelet[3284]: E0715 23:14:24.794570 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.795327 kubelet[3284]: E0715 23:14:24.795276 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.795696 kubelet[3284]: W0715 23:14:24.795603 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.795696 kubelet[3284]: E0715 23:14:24.795691 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.797687 kubelet[3284]: E0715 23:14:24.797645 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.797966 kubelet[3284]: W0715 23:14:24.797851 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.797966 kubelet[3284]: E0715 23:14:24.797941 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.798581 kubelet[3284]: E0715 23:14:24.798489 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.798581 kubelet[3284]: W0715 23:14:24.798517 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.798804 kubelet[3284]: E0715 23:14:24.798624 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.799384 kubelet[3284]: E0715 23:14:24.799345 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.799549 kubelet[3284]: W0715 23:14:24.799516 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.799793 kubelet[3284]: E0715 23:14:24.799729 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.801596 kubelet[3284]: E0715 23:14:24.800245 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.801596 kubelet[3284]: W0715 23:14:24.800279 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.801596 kubelet[3284]: E0715 23:14:24.800353 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.802762 kubelet[3284]: E0715 23:14:24.802725 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.803087 kubelet[3284]: W0715 23:14:24.802881 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.803087 kubelet[3284]: E0715 23:14:24.802959 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.803865 kubelet[3284]: E0715 23:14:24.803731 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.803865 kubelet[3284]: W0715 23:14:24.803761 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.803865 kubelet[3284]: E0715 23:14:24.803824 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.804542 kubelet[3284]: E0715 23:14:24.804414 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.804542 kubelet[3284]: W0715 23:14:24.804442 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.804542 kubelet[3284]: E0715 23:14:24.804508 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.805519 kubelet[3284]: E0715 23:14:24.805376 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.805519 kubelet[3284]: W0715 23:14:24.805409 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.805519 kubelet[3284]: E0715 23:14:24.805475 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.807593 kubelet[3284]: E0715 23:14:24.806410 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.807593 kubelet[3284]: W0715 23:14:24.806453 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.807593 kubelet[3284]: E0715 23:14:24.806582 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.808505 kubelet[3284]: E0715 23:14:24.808331 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.808505 kubelet[3284]: W0715 23:14:24.808364 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.808728 kubelet[3284]: E0715 23:14:24.808603 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.809437 kubelet[3284]: E0715 23:14:24.809405 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.809806 kubelet[3284]: W0715 23:14:24.809587 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.809806 kubelet[3284]: E0715 23:14:24.809665 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.810084 kubelet[3284]: E0715 23:14:24.810061 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.810218 kubelet[3284]: W0715 23:14:24.810193 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.810580 kubelet[3284]: E0715 23:14:24.810464 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.812222 kubelet[3284]: E0715 23:14:24.812024 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.812222 kubelet[3284]: W0715 23:14:24.812065 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.812222 kubelet[3284]: E0715 23:14:24.812153 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.813814 kubelet[3284]: E0715 23:14:24.813771 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.814230 kubelet[3284]: W0715 23:14:24.813994 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.814230 kubelet[3284]: E0715 23:14:24.814081 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.814637 kubelet[3284]: E0715 23:14:24.814610 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.814749 kubelet[3284]: W0715 23:14:24.814726 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.815749 kubelet[3284]: E0715 23:14:24.815691 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.816441 kubelet[3284]: E0715 23:14:24.816412 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.816599 kubelet[3284]: W0715 23:14:24.816574 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.816900 kubelet[3284]: E0715 23:14:24.816747 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.817767 kubelet[3284]: E0715 23:14:24.817733 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.817990 kubelet[3284]: W0715 23:14:24.817893 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.817990 kubelet[3284]: E0715 23:14:24.817939 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:14:24.866663 kubelet[3284]: E0715 23:14:24.865489 3284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:14:24.866663 kubelet[3284]: W0715 23:14:24.866452 3284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:14:24.866663 kubelet[3284]: E0715 23:14:24.866527 3284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:14:24.901854 containerd[1995]: time="2025-07-15T23:14:24.901748340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dmfmg,Uid:6b780e9b-fe9e-4e28-ad67-b0d1943c80e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b65e81b8bad1077cb4158c47ff8316ed17615376f5650a1eec4f54965f42356\"" Jul 15 23:14:24.910672 containerd[1995]: time="2025-07-15T23:14:24.908883444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 23:14:25.152462 containerd[1995]: time="2025-07-15T23:14:25.152361933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7466995f9f-7cgsm,Uid:577fa492-fa89-4bd7-ad45-9e4e64e285bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"32412ee7035892b32b67ab81b0fcc5e6b55b563576f951f9baea8e5ad16e5cd6\"" Jul 15 23:14:26.151319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1092124779.mount: Deactivated successfully. Jul 15 23:14:26.177036 kubelet[3284]: E0715 23:14:26.176915 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j48z9" podUID="fe57ee22-2543-410d-ab8c-77d8463dc034" Jul 15 23:14:26.318626 containerd[1995]: time="2025-07-15T23:14:26.317910935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:26.320198 containerd[1995]: time="2025-07-15T23:14:26.320139023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Jul 15 23:14:26.322651 containerd[1995]: time="2025-07-15T23:14:26.322592411Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:26.330584 containerd[1995]: time="2025-07-15T23:14:26.330495383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:26.331704 containerd[1995]: time="2025-07-15T23:14:26.331397927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.422442243s" Jul 15 23:14:26.331704 containerd[1995]: time="2025-07-15T23:14:26.331462139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 15 23:14:26.334693 containerd[1995]: time="2025-07-15T23:14:26.334173791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 23:14:26.335849 containerd[1995]: time="2025-07-15T23:14:26.335746823Z" level=info msg="CreateContainer within sandbox \"1b65e81b8bad1077cb4158c47ff8316ed17615376f5650a1eec4f54965f42356\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 23:14:26.359796 containerd[1995]: time="2025-07-15T23:14:26.359719451Z" level=info msg="Container 89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:26.383260 containerd[1995]: time="2025-07-15T23:14:26.382994243Z" level=info msg="CreateContainer within sandbox \"1b65e81b8bad1077cb4158c47ff8316ed17615376f5650a1eec4f54965f42356\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id 
\"89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03\"" Jul 15 23:14:26.385383 containerd[1995]: time="2025-07-15T23:14:26.384842519Z" level=info msg="StartContainer for \"89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03\"" Jul 15 23:14:26.391116 containerd[1995]: time="2025-07-15T23:14:26.391039475Z" level=info msg="connecting to shim 89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03" address="unix:///run/containerd/s/859fb013858bb8382ad9c0b6020b93cdbd969d550a5b5807064bc0711942ffc0" protocol=ttrpc version=3 Jul 15 23:14:26.444141 systemd[1]: Started cri-containerd-89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03.scope - libcontainer container 89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03. Jul 15 23:14:26.547800 containerd[1995]: time="2025-07-15T23:14:26.546927672Z" level=info msg="StartContainer for \"89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03\" returns successfully" Jul 15 23:14:26.588870 systemd[1]: cri-containerd-89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03.scope: Deactivated successfully. Jul 15 23:14:26.597975 containerd[1995]: time="2025-07-15T23:14:26.597890664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03\" id:\"89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03\" pid:4056 exited_at:{seconds:1752621266 nanos:595493892}" Jul 15 23:14:26.597975 containerd[1995]: time="2025-07-15T23:14:26.597887532Z" level=info msg="received exit event container_id:\"89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03\" id:\"89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03\" pid:4056 exited_at:{seconds:1752621266 nanos:595493892}" Jul 15 23:14:27.078147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89c52b24337ecba6665acf2626d16bfbb35035e7f3769da0df6b3cb9ab01ee03-rootfs.mount: Deactivated successfully. 
Jul 15 23:14:28.176439 kubelet[3284]: E0715 23:14:28.175412 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j48z9" podUID="fe57ee22-2543-410d-ab8c-77d8463dc034" Jul 15 23:14:28.419238 containerd[1995]: time="2025-07-15T23:14:28.419184721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:28.421295 containerd[1995]: time="2025-07-15T23:14:28.421249177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828" Jul 15 23:14:28.423372 containerd[1995]: time="2025-07-15T23:14:28.423329161Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:28.429074 containerd[1995]: time="2025-07-15T23:14:28.428934541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:28.431708 containerd[1995]: time="2025-07-15T23:14:28.430480441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.09625277s" Jul 15 23:14:28.432416 containerd[1995]: time="2025-07-15T23:14:28.432378997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 15 23:14:28.435768 containerd[1995]: time="2025-07-15T23:14:28.435705253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 23:14:28.472188 containerd[1995]: time="2025-07-15T23:14:28.472111813Z" level=info msg="CreateContainer within sandbox \"32412ee7035892b32b67ab81b0fcc5e6b55b563576f951f9baea8e5ad16e5cd6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 23:14:28.492005 containerd[1995]: time="2025-07-15T23:14:28.491908849Z" level=info msg="Container a2d4d0e270e85e5bb30e90036ea7bdcf190864c090f06c10d0e0115af1a0210a: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:28.499833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8253903.mount: Deactivated successfully. Jul 15 23:14:28.522177 containerd[1995]: time="2025-07-15T23:14:28.522079922Z" level=info msg="CreateContainer within sandbox \"32412ee7035892b32b67ab81b0fcc5e6b55b563576f951f9baea8e5ad16e5cd6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a2d4d0e270e85e5bb30e90036ea7bdcf190864c090f06c10d0e0115af1a0210a\"" Jul 15 23:14:28.523761 containerd[1995]: time="2025-07-15T23:14:28.523634534Z" level=info msg="StartContainer for \"a2d4d0e270e85e5bb30e90036ea7bdcf190864c090f06c10d0e0115af1a0210a\"" Jul 15 23:14:28.528626 containerd[1995]: time="2025-07-15T23:14:28.528513446Z" level=info msg="connecting to shim a2d4d0e270e85e5bb30e90036ea7bdcf190864c090f06c10d0e0115af1a0210a" address="unix:///run/containerd/s/6508402977e2d18eb25e414355ee8fa7fd1ae0e94056ad05c75c526be7da785e" protocol=ttrpc version=3 Jul 15 23:14:28.567891 systemd[1]: Started cri-containerd-a2d4d0e270e85e5bb30e90036ea7bdcf190864c090f06c10d0e0115af1a0210a.scope - libcontainer container a2d4d0e270e85e5bb30e90036ea7bdcf190864c090f06c10d0e0115af1a0210a. 
Jul 15 23:14:28.664962 containerd[1995]: time="2025-07-15T23:14:28.664853942Z" level=info msg="StartContainer for \"a2d4d0e270e85e5bb30e90036ea7bdcf190864c090f06c10d0e0115af1a0210a\" returns successfully" Jul 15 23:14:29.461103 kubelet[3284]: I0715 23:14:29.460999 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7466995f9f-7cgsm" podStartSLOduration=3.185773702 podStartE2EDuration="6.460978718s" podCreationTimestamp="2025-07-15 23:14:23 +0000 UTC" firstStartedPulling="2025-07-15 23:14:25.158671905 +0000 UTC m=+29.224393730" lastFinishedPulling="2025-07-15 23:14:28.433876849 +0000 UTC m=+32.499598746" observedRunningTime="2025-07-15 23:14:29.460233134 +0000 UTC m=+33.525954995" watchObservedRunningTime="2025-07-15 23:14:29.460978718 +0000 UTC m=+33.526700555" Jul 15 23:14:30.181507 kubelet[3284]: E0715 23:14:30.181447 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j48z9" podUID="fe57ee22-2543-410d-ab8c-77d8463dc034" Jul 15 23:14:31.427628 containerd[1995]: time="2025-07-15T23:14:31.427543540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:31.430847 containerd[1995]: time="2025-07-15T23:14:31.430768060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 15 23:14:31.433461 containerd[1995]: time="2025-07-15T23:14:31.433376164Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:31.439742 containerd[1995]: time="2025-07-15T23:14:31.439662160Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:31.441501 containerd[1995]: time="2025-07-15T23:14:31.441314344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.005515191s" Jul 15 23:14:31.441501 containerd[1995]: time="2025-07-15T23:14:31.441370864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 15 23:14:31.447205 containerd[1995]: time="2025-07-15T23:14:31.446021572Z" level=info msg="CreateContainer within sandbox \"1b65e81b8bad1077cb4158c47ff8316ed17615376f5650a1eec4f54965f42356\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 23:14:31.477588 containerd[1995]: time="2025-07-15T23:14:31.475212928Z" level=info msg="Container 55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:31.504058 containerd[1995]: time="2025-07-15T23:14:31.503988484Z" level=info msg="CreateContainer within sandbox \"1b65e81b8bad1077cb4158c47ff8316ed17615376f5650a1eec4f54965f42356\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94\"" Jul 15 23:14:31.511105 containerd[1995]: time="2025-07-15T23:14:31.510773272Z" level=info msg="StartContainer for \"55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94\"" Jul 15 23:14:31.516760 containerd[1995]: time="2025-07-15T23:14:31.516290392Z" level=info msg="connecting to shim 
55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94" address="unix:///run/containerd/s/859fb013858bb8382ad9c0b6020b93cdbd969d550a5b5807064bc0711942ffc0" protocol=ttrpc version=3 Jul 15 23:14:31.561884 systemd[1]: Started cri-containerd-55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94.scope - libcontainer container 55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94. Jul 15 23:14:31.651680 containerd[1995]: time="2025-07-15T23:14:31.651609581Z" level=info msg="StartContainer for \"55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94\" returns successfully" Jul 15 23:14:32.175616 kubelet[3284]: E0715 23:14:32.174814 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j48z9" podUID="fe57ee22-2543-410d-ab8c-77d8463dc034" Jul 15 23:14:32.857531 containerd[1995]: time="2025-07-15T23:14:32.857461255Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:14:32.863304 systemd[1]: cri-containerd-55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94.scope: Deactivated successfully. Jul 15 23:14:32.864332 systemd[1]: cri-containerd-55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94.scope: Consumed 928ms CPU time, 186.3M memory peak, 165.8M written to disk. 
Jul 15 23:14:32.867915 containerd[1995]: time="2025-07-15T23:14:32.867762667Z" level=info msg="received exit event container_id:\"55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94\" id:\"55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94\" pid:4163 exited_at:{seconds:1752621272 nanos:867361447}" Jul 15 23:14:32.868281 containerd[1995]: time="2025-07-15T23:14:32.868236271Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94\" id:\"55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94\" pid:4163 exited_at:{seconds:1752621272 nanos:867361447}" Jul 15 23:14:32.908297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55418242ee53414ef3850863df6c1ae3a763eb33a677f33ef96f6e629434bf94-rootfs.mount: Deactivated successfully. Jul 15 23:14:32.955595 kubelet[3284]: I0715 23:14:32.955346 3284 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 23:14:33.030624 systemd[1]: Created slice kubepods-burstable-pod6ea3fe6c_3568_40fc_bb4d_6e2955010c16.slice - libcontainer container kubepods-burstable-pod6ea3fe6c_3568_40fc_bb4d_6e2955010c16.slice. 
Jul 15 23:14:33.061585 kubelet[3284]: I0715 23:14:33.061222 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56nwc\" (UniqueName: \"kubernetes.io/projected/375a0692-ef8f-4068-a807-b03338e74674-kube-api-access-56nwc\") pod \"coredns-7c65d6cfc9-brdvw\" (UID: \"375a0692-ef8f-4068-a807-b03338e74674\") " pod="kube-system/coredns-7c65d6cfc9-brdvw" Jul 15 23:14:33.064751 kubelet[3284]: I0715 23:14:33.063511 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6d9a04-36e2-4d83-83da-b77b4980d2aa-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-dn8dz\" (UID: \"5c6d9a04-36e2-4d83-83da-b77b4980d2aa\") " pod="calico-system/goldmane-58fd7646b9-dn8dz" Jul 15 23:14:33.064751 kubelet[3284]: I0715 23:14:33.064724 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5c6d9a04-36e2-4d83-83da-b77b4980d2aa-goldmane-key-pair\") pod \"goldmane-58fd7646b9-dn8dz\" (UID: \"5c6d9a04-36e2-4d83-83da-b77b4980d2aa\") " pod="calico-system/goldmane-58fd7646b9-dn8dz" Jul 15 23:14:33.064385 systemd[1]: Created slice kubepods-burstable-pod375a0692_ef8f_4068_a807_b03338e74674.slice - libcontainer container kubepods-burstable-pod375a0692_ef8f_4068_a807_b03338e74674.slice. 
Jul 15 23:14:33.065076 kubelet[3284]: I0715 23:14:33.064813 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmsqw\" (UniqueName: \"kubernetes.io/projected/0ba49da8-b5ae-49f6-a962-a399f396556d-kube-api-access-pmsqw\") pod \"calico-apiserver-79c9578dc8-48pmd\" (UID: \"0ba49da8-b5ae-49f6-a962-a399f396556d\") " pod="calico-apiserver/calico-apiserver-79c9578dc8-48pmd" Jul 15 23:14:33.065076 kubelet[3284]: I0715 23:14:33.064867 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c6d9a04-36e2-4d83-83da-b77b4980d2aa-config\") pod \"goldmane-58fd7646b9-dn8dz\" (UID: \"5c6d9a04-36e2-4d83-83da-b77b4980d2aa\") " pod="calico-system/goldmane-58fd7646b9-dn8dz" Jul 15 23:14:33.065076 kubelet[3284]: I0715 23:14:33.064935 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b34ce3a0-ef81-40df-83fb-eff1fce094a1-tigera-ca-bundle\") pod \"calico-kube-controllers-544bb4cb54-bn5qk\" (UID: \"b34ce3a0-ef81-40df-83fb-eff1fce094a1\") " pod="calico-system/calico-kube-controllers-544bb4cb54-bn5qk" Jul 15 23:14:33.065076 kubelet[3284]: I0715 23:14:33.065002 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw9l7\" (UniqueName: \"kubernetes.io/projected/b34ce3a0-ef81-40df-83fb-eff1fce094a1-kube-api-access-vw9l7\") pod \"calico-kube-controllers-544bb4cb54-bn5qk\" (UID: \"b34ce3a0-ef81-40df-83fb-eff1fce094a1\") " pod="calico-system/calico-kube-controllers-544bb4cb54-bn5qk" Jul 15 23:14:33.065076 kubelet[3284]: I0715 23:14:33.065070 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ea3fe6c-3568-40fc-bb4d-6e2955010c16-config-volume\") pod 
\"coredns-7c65d6cfc9-4h2dn\" (UID: \"6ea3fe6c-3568-40fc-bb4d-6e2955010c16\") " pod="kube-system/coredns-7c65d6cfc9-4h2dn" Jul 15 23:14:33.065332 kubelet[3284]: I0715 23:14:33.065115 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ba49da8-b5ae-49f6-a962-a399f396556d-calico-apiserver-certs\") pod \"calico-apiserver-79c9578dc8-48pmd\" (UID: \"0ba49da8-b5ae-49f6-a962-a399f396556d\") " pod="calico-apiserver/calico-apiserver-79c9578dc8-48pmd" Jul 15 23:14:33.065332 kubelet[3284]: I0715 23:14:33.065180 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mctg\" (UniqueName: \"kubernetes.io/projected/6ea3fe6c-3568-40fc-bb4d-6e2955010c16-kube-api-access-4mctg\") pod \"coredns-7c65d6cfc9-4h2dn\" (UID: \"6ea3fe6c-3568-40fc-bb4d-6e2955010c16\") " pod="kube-system/coredns-7c65d6cfc9-4h2dn" Jul 15 23:14:33.065332 kubelet[3284]: I0715 23:14:33.065252 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzn4k\" (UniqueName: \"kubernetes.io/projected/5c6d9a04-36e2-4d83-83da-b77b4980d2aa-kube-api-access-hzn4k\") pod \"goldmane-58fd7646b9-dn8dz\" (UID: \"5c6d9a04-36e2-4d83-83da-b77b4980d2aa\") " pod="calico-system/goldmane-58fd7646b9-dn8dz" Jul 15 23:14:33.065491 kubelet[3284]: I0715 23:14:33.065304 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/375a0692-ef8f-4068-a807-b03338e74674-config-volume\") pod \"coredns-7c65d6cfc9-brdvw\" (UID: \"375a0692-ef8f-4068-a807-b03338e74674\") " pod="kube-system/coredns-7c65d6cfc9-brdvw" Jul 15 23:14:33.086516 systemd[1]: Created slice kubepods-besteffort-podb34ce3a0_ef81_40df_83fb_eff1fce094a1.slice - libcontainer container kubepods-besteffort-podb34ce3a0_ef81_40df_83fb_eff1fce094a1.slice. 
Jul 15 23:14:33.099648 kubelet[3284]: W0715 23:14:33.097707 3284 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-19-30" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-19-30' and this object Jul 15 23:14:33.099648 kubelet[3284]: E0715 23:14:33.097833 3284 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-19-30\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-19-30' and this object" logger="UnhandledError" Jul 15 23:14:33.099648 kubelet[3284]: W0715 23:14:33.098035 3284 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ip-172-31-19-30" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-19-30' and this object Jul 15 23:14:33.099648 kubelet[3284]: E0715 23:14:33.098838 3284 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ip-172-31-19-30\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-19-30' and this object" logger="UnhandledError" Jul 15 23:14:33.100355 kubelet[3284]: W0715 23:14:33.099455 3284 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ip-172-31-19-30" cannot list resource "configmaps" in API group "" 
in the namespace "calico-system": no relationship found between node 'ip-172-31-19-30' and this object Jul 15 23:14:33.101749 kubelet[3284]: E0715 23:14:33.101625 3284 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ip-172-31-19-30\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-19-30' and this object" logger="UnhandledError" Jul 15 23:14:33.104766 systemd[1]: Created slice kubepods-besteffort-pod0ba49da8_b5ae_49f6_a962_a399f396556d.slice - libcontainer container kubepods-besteffort-pod0ba49da8_b5ae_49f6_a962_a399f396556d.slice. Jul 15 23:14:33.145723 systemd[1]: Created slice kubepods-besteffort-pod5c6d9a04_36e2_4d83_83da_b77b4980d2aa.slice - libcontainer container kubepods-besteffort-pod5c6d9a04_36e2_4d83_83da_b77b4980d2aa.slice. Jul 15 23:14:33.171087 kubelet[3284]: I0715 23:14:33.169847 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d7145806-3918-48e2-8866-6debfb04a984-whisker-backend-key-pair\") pod \"whisker-745978599d-l47l2\" (UID: \"d7145806-3918-48e2-8866-6debfb04a984\") " pod="calico-system/whisker-745978599d-l47l2" Jul 15 23:14:33.171087 kubelet[3284]: I0715 23:14:33.170013 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rng4\" (UniqueName: \"kubernetes.io/projected/d7145806-3918-48e2-8866-6debfb04a984-kube-api-access-7rng4\") pod \"whisker-745978599d-l47l2\" (UID: \"d7145806-3918-48e2-8866-6debfb04a984\") " pod="calico-system/whisker-745978599d-l47l2" Jul 15 23:14:33.171087 kubelet[3284]: I0715 23:14:33.170057 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jk2hb\" (UniqueName: \"kubernetes.io/projected/bf9bea21-8a82-469d-9185-c9d7f53ab6f9-kube-api-access-jk2hb\") pod \"calico-apiserver-79c9578dc8-lrwvt\" (UID: \"bf9bea21-8a82-469d-9185-c9d7f53ab6f9\") " pod="calico-apiserver/calico-apiserver-79c9578dc8-lrwvt" Jul 15 23:14:33.171087 kubelet[3284]: I0715 23:14:33.170143 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bf9bea21-8a82-469d-9185-c9d7f53ab6f9-calico-apiserver-certs\") pod \"calico-apiserver-79c9578dc8-lrwvt\" (UID: \"bf9bea21-8a82-469d-9185-c9d7f53ab6f9\") " pod="calico-apiserver/calico-apiserver-79c9578dc8-lrwvt" Jul 15 23:14:33.171087 kubelet[3284]: I0715 23:14:33.170207 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7145806-3918-48e2-8866-6debfb04a984-whisker-ca-bundle\") pod \"whisker-745978599d-l47l2\" (UID: \"d7145806-3918-48e2-8866-6debfb04a984\") " pod="calico-system/whisker-745978599d-l47l2" Jul 15 23:14:33.171431 systemd[1]: Created slice kubepods-besteffort-podd7145806_3918_48e2_8866_6debfb04a984.slice - libcontainer container kubepods-besteffort-podd7145806_3918_48e2_8866_6debfb04a984.slice. Jul 15 23:14:33.228473 systemd[1]: Created slice kubepods-besteffort-podbf9bea21_8a82_469d_9185_c9d7f53ab6f9.slice - libcontainer container kubepods-besteffort-podbf9bea21_8a82_469d_9185_c9d7f53ab6f9.slice. 
Jul 15 23:14:33.350949 containerd[1995]: time="2025-07-15T23:14:33.350524998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4h2dn,Uid:6ea3fe6c-3568-40fc-bb4d-6e2955010c16,Namespace:kube-system,Attempt:0,}" Jul 15 23:14:33.386879 containerd[1995]: time="2025-07-15T23:14:33.386804322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-brdvw,Uid:375a0692-ef8f-4068-a807-b03338e74674,Namespace:kube-system,Attempt:0,}" Jul 15 23:14:33.400116 containerd[1995]: time="2025-07-15T23:14:33.399524970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544bb4cb54-bn5qk,Uid:b34ce3a0-ef81-40df-83fb-eff1fce094a1,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:33.432334 containerd[1995]: time="2025-07-15T23:14:33.432233358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c9578dc8-48pmd,Uid:0ba49da8-b5ae-49f6-a962-a399f396556d,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:14:33.495510 containerd[1995]: time="2025-07-15T23:14:33.495436542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745978599d-l47l2,Uid:d7145806-3918-48e2-8866-6debfb04a984,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:33.559196 containerd[1995]: time="2025-07-15T23:14:33.559053127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c9578dc8-lrwvt,Uid:bf9bea21-8a82-469d-9185-c9d7f53ab6f9,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:14:33.718095 containerd[1995]: time="2025-07-15T23:14:33.717905599Z" level=error msg="Failed to destroy network for sandbox \"515d1a2c58871dc3db2a0272e47c6dacb6ea238054fbea146b0253741eb13875\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:14:33.816425 containerd[1995]: time="2025-07-15T23:14:33.816074660Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-4h2dn,Uid:6ea3fe6c-3568-40fc-bb4d-6e2955010c16,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"515d1a2c58871dc3db2a0272e47c6dacb6ea238054fbea146b0253741eb13875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:14:33.819021 kubelet[3284]: E0715 23:14:33.818394 3284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515d1a2c58871dc3db2a0272e47c6dacb6ea238054fbea146b0253741eb13875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:14:33.819021 kubelet[3284]: E0715 23:14:33.818498 3284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515d1a2c58871dc3db2a0272e47c6dacb6ea238054fbea146b0253741eb13875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4h2dn" Jul 15 23:14:33.819021 kubelet[3284]: E0715 23:14:33.818535 3284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515d1a2c58871dc3db2a0272e47c6dacb6ea238054fbea146b0253741eb13875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4h2dn" Jul 15 23:14:33.821246 kubelet[3284]: E0715 23:14:33.818665 3284 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4h2dn_kube-system(6ea3fe6c-3568-40fc-bb4d-6e2955010c16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4h2dn_kube-system(6ea3fe6c-3568-40fc-bb4d-6e2955010c16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"515d1a2c58871dc3db2a0272e47c6dacb6ea238054fbea146b0253741eb13875\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4h2dn" podUID="6ea3fe6c-3568-40fc-bb4d-6e2955010c16" Jul 15 23:14:34.119201 containerd[1995]: time="2025-07-15T23:14:34.118997489Z" level=error msg="Failed to destroy network for sandbox \"08365bf1151b736d8a8b5cd11f40e4c344974f5c1f54e83263a1fcf1e109c476\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:14:34.128145 systemd[1]: run-netns-cni\x2d46768305\x2d48ae\x2d0ebf\x2dd382\x2dde19acbe254d.mount: Deactivated successfully. 
Jul 15 23:14:34.132007 containerd[1995]: time="2025-07-15T23:14:34.130161473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544bb4cb54-bn5qk,Uid:b34ce3a0-ef81-40df-83fb-eff1fce094a1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08365bf1151b736d8a8b5cd11f40e4c344974f5c1f54e83263a1fcf1e109c476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.133587 kubelet[3284]: E0715 23:14:34.133087 3284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08365bf1151b736d8a8b5cd11f40e4c344974f5c1f54e83263a1fcf1e109c476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.133587 kubelet[3284]: E0715 23:14:34.133180 3284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08365bf1151b736d8a8b5cd11f40e4c344974f5c1f54e83263a1fcf1e109c476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-544bb4cb54-bn5qk"
Jul 15 23:14:34.133587 kubelet[3284]: E0715 23:14:34.133215 3284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08365bf1151b736d8a8b5cd11f40e4c344974f5c1f54e83263a1fcf1e109c476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-544bb4cb54-bn5qk"
Jul 15 23:14:34.133995 kubelet[3284]: E0715 23:14:34.133280 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-544bb4cb54-bn5qk_calico-system(b34ce3a0-ef81-40df-83fb-eff1fce094a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-544bb4cb54-bn5qk_calico-system(b34ce3a0-ef81-40df-83fb-eff1fce094a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08365bf1151b736d8a8b5cd11f40e4c344974f5c1f54e83263a1fcf1e109c476\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-544bb4cb54-bn5qk" podUID="b34ce3a0-ef81-40df-83fb-eff1fce094a1"
Jul 15 23:14:34.134133 containerd[1995]: time="2025-07-15T23:14:34.133709249Z" level=error msg="Failed to destroy network for sandbox \"85f7f2bc279d2eb893f7109861ccce6a479ad5bdae975e7fd429ec6fff3517fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.143002 systemd[1]: run-netns-cni\x2d77c686ef\x2d85a0\x2d3ca8\x2db27b\x2d4841ccf4735d.mount: Deactivated successfully.
Jul 15 23:14:34.146601 containerd[1995]: time="2025-07-15T23:14:34.146306369Z" level=error msg="Failed to destroy network for sandbox \"f105bcd0e1c243aff1822f030025cf51a86510968fea7cbad6f0abe839fce2c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.156022 systemd[1]: run-netns-cni\x2dfbd2c05f\x2da9bf\x2d8094\x2db92d\x2d69303eb115fb.mount: Deactivated successfully.
Jul 15 23:14:34.157022 containerd[1995]: time="2025-07-15T23:14:34.156109182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-brdvw,Uid:375a0692-ef8f-4068-a807-b03338e74674,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f7f2bc279d2eb893f7109861ccce6a479ad5bdae975e7fd429ec6fff3517fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.159851 kubelet[3284]: E0715 23:14:34.159716 3284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f7f2bc279d2eb893f7109861ccce6a479ad5bdae975e7fd429ec6fff3517fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.159851 kubelet[3284]: E0715 23:14:34.159811 3284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f7f2bc279d2eb893f7109861ccce6a479ad5bdae975e7fd429ec6fff3517fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-brdvw"
Jul 15 23:14:34.159851 kubelet[3284]: E0715 23:14:34.159847 3284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f7f2bc279d2eb893f7109861ccce6a479ad5bdae975e7fd429ec6fff3517fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-brdvw"
Jul 15 23:14:34.167977 kubelet[3284]: E0715 23:14:34.164103 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-brdvw_kube-system(375a0692-ef8f-4068-a807-b03338e74674)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-brdvw_kube-system(375a0692-ef8f-4068-a807-b03338e74674)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85f7f2bc279d2eb893f7109861ccce6a479ad5bdae975e7fd429ec6fff3517fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-brdvw" podUID="375a0692-ef8f-4068-a807-b03338e74674"
Jul 15 23:14:34.172714 containerd[1995]: time="2025-07-15T23:14:34.171047382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c9578dc8-48pmd,Uid:0ba49da8-b5ae-49f6-a962-a399f396556d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f105bcd0e1c243aff1822f030025cf51a86510968fea7cbad6f0abe839fce2c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.175338 kubelet[3284]: E0715 23:14:34.175021 3284 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition
Jul 15 23:14:34.177991 kubelet[3284]: E0715 23:14:34.176653 3284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f105bcd0e1c243aff1822f030025cf51a86510968fea7cbad6f0abe839fce2c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.177991 kubelet[3284]: E0715 23:14:34.176733 3284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f105bcd0e1c243aff1822f030025cf51a86510968fea7cbad6f0abe839fce2c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c9578dc8-48pmd"
Jul 15 23:14:34.177991 kubelet[3284]: E0715 23:14:34.176765 3284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f105bcd0e1c243aff1822f030025cf51a86510968fea7cbad6f0abe839fce2c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c9578dc8-48pmd"
Jul 15 23:14:34.178218 kubelet[3284]: E0715 23:14:34.176852 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79c9578dc8-48pmd_calico-apiserver(0ba49da8-b5ae-49f6-a962-a399f396556d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79c9578dc8-48pmd_calico-apiserver(0ba49da8-b5ae-49f6-a962-a399f396556d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f105bcd0e1c243aff1822f030025cf51a86510968fea7cbad6f0abe839fce2c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c9578dc8-48pmd" podUID="0ba49da8-b5ae-49f6-a962-a399f396556d"
Jul 15 23:14:34.179196 kubelet[3284]: E0715 23:14:34.179158 3284 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Jul 15 23:14:34.180882 kubelet[3284]: E0715 23:14:34.179911 3284 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6d9a04-36e2-4d83-83da-b77b4980d2aa-config podName:5c6d9a04-36e2-4d83-83da-b77b4980d2aa nodeName:}" failed. No retries permitted until 2025-07-15 23:14:34.675112694 +0000 UTC m=+38.740834531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5c6d9a04-36e2-4d83-83da-b77b4980d2aa-config") pod "goldmane-58fd7646b9-dn8dz" (UID: "5c6d9a04-36e2-4d83-83da-b77b4980d2aa") : failed to sync configmap cache: timed out waiting for the condition
Jul 15 23:14:34.180882 kubelet[3284]: E0715 23:14:34.180216 3284 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6d9a04-36e2-4d83-83da-b77b4980d2aa-goldmane-ca-bundle podName:5c6d9a04-36e2-4d83-83da-b77b4980d2aa nodeName:}" failed. No retries permitted until 2025-07-15 23:14:34.680187146 +0000 UTC m=+38.745908971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/5c6d9a04-36e2-4d83-83da-b77b4980d2aa-goldmane-ca-bundle") pod "goldmane-58fd7646b9-dn8dz" (UID: "5c6d9a04-36e2-4d83-83da-b77b4980d2aa") : failed to sync configmap cache: timed out waiting for the condition
Jul 15 23:14:34.210141 systemd[1]: Created slice kubepods-besteffort-podfe57ee22_2543_410d_ab8c_77d8463dc034.slice - libcontainer container kubepods-besteffort-podfe57ee22_2543_410d_ab8c_77d8463dc034.slice.
Jul 15 23:14:34.220204 containerd[1995]: time="2025-07-15T23:14:34.220094058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j48z9,Uid:fe57ee22-2543-410d-ab8c-77d8463dc034,Namespace:calico-system,Attempt:0,}"
Jul 15 23:14:34.231937 containerd[1995]: time="2025-07-15T23:14:34.231759378Z" level=error msg="Failed to destroy network for sandbox \"3e6b691046e7d4593ba9168531abf27252802bf08895e26af3d81a67097d56fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.235126 containerd[1995]: time="2025-07-15T23:14:34.235028934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c9578dc8-lrwvt,Uid:bf9bea21-8a82-469d-9185-c9d7f53ab6f9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e6b691046e7d4593ba9168531abf27252802bf08895e26af3d81a67097d56fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.236180 kubelet[3284]: E0715 23:14:34.236076 3284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e6b691046e7d4593ba9168531abf27252802bf08895e26af3d81a67097d56fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.236180 kubelet[3284]: E0715 23:14:34.236177 3284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e6b691046e7d4593ba9168531abf27252802bf08895e26af3d81a67097d56fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c9578dc8-lrwvt"
Jul 15 23:14:34.236726 kubelet[3284]: E0715 23:14:34.236217 3284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e6b691046e7d4593ba9168531abf27252802bf08895e26af3d81a67097d56fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c9578dc8-lrwvt"
Jul 15 23:14:34.236726 kubelet[3284]: E0715 23:14:34.236510 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79c9578dc8-lrwvt_calico-apiserver(bf9bea21-8a82-469d-9185-c9d7f53ab6f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79c9578dc8-lrwvt_calico-apiserver(bf9bea21-8a82-469d-9185-c9d7f53ab6f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e6b691046e7d4593ba9168531abf27252802bf08895e26af3d81a67097d56fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c9578dc8-lrwvt" podUID="bf9bea21-8a82-469d-9185-c9d7f53ab6f9"
Jul 15 23:14:34.250614 containerd[1995]: time="2025-07-15T23:14:34.250482114Z" level=error msg="Failed to destroy network for sandbox \"152fcb2458a557ae46d3c6041840f41e85f4a7150c4fa013d0f92b29b98673c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.253915 containerd[1995]: time="2025-07-15T23:14:34.253833414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745978599d-l47l2,Uid:d7145806-3918-48e2-8866-6debfb04a984,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"152fcb2458a557ae46d3c6041840f41e85f4a7150c4fa013d0f92b29b98673c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.254889 kubelet[3284]: E0715 23:14:34.254831 3284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152fcb2458a557ae46d3c6041840f41e85f4a7150c4fa013d0f92b29b98673c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.255201 kubelet[3284]: E0715 23:14:34.255122 3284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152fcb2458a557ae46d3c6041840f41e85f4a7150c4fa013d0f92b29b98673c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745978599d-l47l2"
Jul 15 23:14:34.255458 kubelet[3284]: E0715 23:14:34.255278 3284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152fcb2458a557ae46d3c6041840f41e85f4a7150c4fa013d0f92b29b98673c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745978599d-l47l2"
Jul 15 23:14:34.255853 kubelet[3284]: E0715 23:14:34.255777 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-745978599d-l47l2_calico-system(d7145806-3918-48e2-8866-6debfb04a984)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-745978599d-l47l2_calico-system(d7145806-3918-48e2-8866-6debfb04a984)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"152fcb2458a557ae46d3c6041840f41e85f4a7150c4fa013d0f92b29b98673c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-745978599d-l47l2" podUID="d7145806-3918-48e2-8866-6debfb04a984"
Jul 15 23:14:34.329227 containerd[1995]: time="2025-07-15T23:14:34.329109078Z" level=error msg="Failed to destroy network for sandbox \"4208710299757aa4e1c4656beaff9ab36e750aae203aea66e64ef58190bfe2ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.331849 containerd[1995]: time="2025-07-15T23:14:34.331765326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j48z9,Uid:fe57ee22-2543-410d-ab8c-77d8463dc034,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4208710299757aa4e1c4656beaff9ab36e750aae203aea66e64ef58190bfe2ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.332241 kubelet[3284]: E0715 23:14:34.332111 3284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4208710299757aa4e1c4656beaff9ab36e750aae203aea66e64ef58190bfe2ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:34.332241 kubelet[3284]: E0715 23:14:34.332190 3284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4208710299757aa4e1c4656beaff9ab36e750aae203aea66e64ef58190bfe2ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j48z9"
Jul 15 23:14:34.332241 kubelet[3284]: E0715 23:14:34.332223 3284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4208710299757aa4e1c4656beaff9ab36e750aae203aea66e64ef58190bfe2ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j48z9"
Jul 15 23:14:34.332440 kubelet[3284]: E0715 23:14:34.332297 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j48z9_calico-system(fe57ee22-2543-410d-ab8c-77d8463dc034)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j48z9_calico-system(fe57ee22-2543-410d-ab8c-77d8463dc034)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4208710299757aa4e1c4656beaff9ab36e750aae203aea66e64ef58190bfe2ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j48z9" podUID="fe57ee22-2543-410d-ab8c-77d8463dc034"
Jul 15 23:14:34.479617 containerd[1995]: time="2025-07-15T23:14:34.477038023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 15 23:14:34.907836 systemd[1]: run-netns-cni\x2d576f18cc\x2d9071\x2d045c\x2dad9c\x2d6b46a77ea9d3.mount: Deactivated successfully.
Jul 15 23:14:34.908049 systemd[1]: run-netns-cni\x2dd618e731\x2d96cd\x2d751e\x2d2a63\x2d8eb4d473c33c.mount: Deactivated successfully.
Jul 15 23:14:34.965770 containerd[1995]: time="2025-07-15T23:14:34.965638294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-dn8dz,Uid:5c6d9a04-36e2-4d83-83da-b77b4980d2aa,Namespace:calico-system,Attempt:0,}"
Jul 15 23:14:35.077644 containerd[1995]: time="2025-07-15T23:14:35.077507694Z" level=error msg="Failed to destroy network for sandbox \"4564fab7b7a1df8eb4066ce70d3099d0cdcac5099a4c545d8fca825caee7d27a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:35.081412 systemd[1]: run-netns-cni\x2df6d53a08\x2d43cc\x2d5056\x2d2ae9\x2dbbd6273fed40.mount: Deactivated successfully.
Jul 15 23:14:35.084694 containerd[1995]: time="2025-07-15T23:14:35.084136278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-dn8dz,Uid:5c6d9a04-36e2-4d83-83da-b77b4980d2aa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4564fab7b7a1df8eb4066ce70d3099d0cdcac5099a4c545d8fca825caee7d27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:35.084899 kubelet[3284]: E0715 23:14:35.084634 3284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4564fab7b7a1df8eb4066ce70d3099d0cdcac5099a4c545d8fca825caee7d27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:14:35.084899 kubelet[3284]: E0715 23:14:35.084708 3284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4564fab7b7a1df8eb4066ce70d3099d0cdcac5099a4c545d8fca825caee7d27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-dn8dz"
Jul 15 23:14:35.084899 kubelet[3284]: E0715 23:14:35.084749 3284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4564fab7b7a1df8eb4066ce70d3099d0cdcac5099a4c545d8fca825caee7d27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-dn8dz"
Jul 15 23:14:35.085523 kubelet[3284]: E0715 23:14:35.084813 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-dn8dz_calico-system(5c6d9a04-36e2-4d83-83da-b77b4980d2aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-dn8dz_calico-system(5c6d9a04-36e2-4d83-83da-b77b4980d2aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4564fab7b7a1df8eb4066ce70d3099d0cdcac5099a4c545d8fca825caee7d27a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-dn8dz" podUID="5c6d9a04-36e2-4d83-83da-b77b4980d2aa"
Jul 15 23:14:40.898397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694466403.mount: Deactivated successfully.
Jul 15 23:14:40.975879 containerd[1995]: time="2025-07-15T23:14:40.975682527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:40.977976 containerd[1995]: time="2025-07-15T23:14:40.977923035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909"
Jul 15 23:14:40.979491 containerd[1995]: time="2025-07-15T23:14:40.979429467Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:40.983916 containerd[1995]: time="2025-07-15T23:14:40.983834439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:40.985361 containerd[1995]: time="2025-07-15T23:14:40.985265379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.50801888s"
Jul 15 23:14:40.985361 containerd[1995]: time="2025-07-15T23:14:40.985340151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\""
Jul 15 23:14:41.026797 containerd[1995]: time="2025-07-15T23:14:41.026748600Z" level=info msg="CreateContainer within sandbox \"1b65e81b8bad1077cb4158c47ff8316ed17615376f5650a1eec4f54965f42356\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jul 15 23:14:41.047061 containerd[1995]: time="2025-07-15T23:14:41.046992528Z" level=info msg="Container c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:14:41.077502 containerd[1995]: time="2025-07-15T23:14:41.077275788Z" level=info msg="CreateContainer within sandbox \"1b65e81b8bad1077cb4158c47ff8316ed17615376f5650a1eec4f54965f42356\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2\""
Jul 15 23:14:41.080442 containerd[1995]: time="2025-07-15T23:14:41.080374188Z" level=info msg="StartContainer for \"c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2\""
Jul 15 23:14:41.086888 containerd[1995]: time="2025-07-15T23:14:41.086523780Z" level=info msg="connecting to shim c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2" address="unix:///run/containerd/s/859fb013858bb8382ad9c0b6020b93cdbd969d550a5b5807064bc0711942ffc0" protocol=ttrpc version=3
Jul 15 23:14:41.123887 systemd[1]: Started cri-containerd-c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2.scope - libcontainer container c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2.
Jul 15 23:14:41.231706 containerd[1995]: time="2025-07-15T23:14:41.231486685Z" level=info msg="StartContainer for \"c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2\" returns successfully"
Jul 15 23:14:41.475299 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 15 23:14:41.475476 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jul 15 23:14:41.788664 kubelet[3284]: I0715 23:14:41.787045 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dmfmg" podStartSLOduration=1.7061338080000001 podStartE2EDuration="17.787015527s" podCreationTimestamp="2025-07-15 23:14:24 +0000 UTC" firstStartedPulling="2025-07-15 23:14:24.906996888 +0000 UTC m=+28.972718725" lastFinishedPulling="2025-07-15 23:14:40.987878619 +0000 UTC m=+45.053600444" observedRunningTime="2025-07-15 23:14:41.568800674 +0000 UTC m=+45.634522535" watchObservedRunningTime="2025-07-15 23:14:41.787015527 +0000 UTC m=+45.852737376"
Jul 15 23:14:41.856439 kubelet[3284]: I0715 23:14:41.855989 3284 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7145806-3918-48e2-8866-6debfb04a984-whisker-ca-bundle\") pod \"d7145806-3918-48e2-8866-6debfb04a984\" (UID: \"d7145806-3918-48e2-8866-6debfb04a984\") "
Jul 15 23:14:41.856439 kubelet[3284]: I0715 23:14:41.856185 3284 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d7145806-3918-48e2-8866-6debfb04a984-whisker-backend-key-pair\") pod \"d7145806-3918-48e2-8866-6debfb04a984\" (UID: \"d7145806-3918-48e2-8866-6debfb04a984\") "
Jul 15 23:14:41.856439 kubelet[3284]: I0715 23:14:41.856238 3284 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rng4\" (UniqueName: \"kubernetes.io/projected/d7145806-3918-48e2-8866-6debfb04a984-kube-api-access-7rng4\") pod \"d7145806-3918-48e2-8866-6debfb04a984\" (UID: \"d7145806-3918-48e2-8866-6debfb04a984\") "
Jul 15 23:14:41.860583 kubelet[3284]: I0715 23:14:41.859843 3284 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7145806-3918-48e2-8866-6debfb04a984-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d7145806-3918-48e2-8866-6debfb04a984" (UID: "d7145806-3918-48e2-8866-6debfb04a984"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 15 23:14:41.871737 kubelet[3284]: I0715 23:14:41.871650 3284 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7145806-3918-48e2-8866-6debfb04a984-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d7145806-3918-48e2-8866-6debfb04a984" (UID: "d7145806-3918-48e2-8866-6debfb04a984"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 15 23:14:41.872127 kubelet[3284]: I0715 23:14:41.872079 3284 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7145806-3918-48e2-8866-6debfb04a984-kube-api-access-7rng4" (OuterVolumeSpecName: "kube-api-access-7rng4") pod "d7145806-3918-48e2-8866-6debfb04a984" (UID: "d7145806-3918-48e2-8866-6debfb04a984"). InnerVolumeSpecName "kube-api-access-7rng4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 23:14:41.909072 containerd[1995]: time="2025-07-15T23:14:41.908987596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2\" id:\"420f42d7ae9b2147c507a2f91ce9b57859ffc73696416d19546542d943a622f7\" pid:4468 exit_status:1 exited_at:{seconds:1752621281 nanos:908029144}"
Jul 15 23:14:41.909699 systemd[1]: var-lib-kubelet-pods-d7145806\x2d3918\x2d48e2\x2d8866\x2d6debfb04a984-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7rng4.mount: Deactivated successfully.
Jul 15 23:14:41.910449 systemd[1]: var-lib-kubelet-pods-d7145806\x2d3918\x2d48e2\x2d8866\x2d6debfb04a984-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jul 15 23:14:41.956733 kubelet[3284]: I0715 23:14:41.956666 3284 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7145806-3918-48e2-8866-6debfb04a984-whisker-ca-bundle\") on node \"ip-172-31-19-30\" DevicePath \"\""
Jul 15 23:14:41.956733 kubelet[3284]: I0715 23:14:41.956731 3284 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d7145806-3918-48e2-8866-6debfb04a984-whisker-backend-key-pair\") on node \"ip-172-31-19-30\" DevicePath \"\""
Jul 15 23:14:41.956964 kubelet[3284]: I0715 23:14:41.956756 3284 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rng4\" (UniqueName: \"kubernetes.io/projected/d7145806-3918-48e2-8866-6debfb04a984-kube-api-access-7rng4\") on node \"ip-172-31-19-30\" DevicePath \"\""
Jul 15 23:14:42.194639 systemd[1]: Removed slice kubepods-besteffort-podd7145806_3918_48e2_8866_6debfb04a984.slice - libcontainer container kubepods-besteffort-podd7145806_3918_48e2_8866_6debfb04a984.slice.
Jul 15 23:14:42.640392 systemd[1]: Created slice kubepods-besteffort-podf6368961_c39a_471f_af3d_a5f889aeb370.slice - libcontainer container kubepods-besteffort-podf6368961_c39a_471f_af3d_a5f889aeb370.slice.
Jul 15 23:14:42.661755 kubelet[3284]: I0715 23:14:42.661664 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f6368961-c39a-471f-af3d-a5f889aeb370-whisker-backend-key-pair\") pod \"whisker-998b685db-jz8bw\" (UID: \"f6368961-c39a-471f-af3d-a5f889aeb370\") " pod="calico-system/whisker-998b685db-jz8bw"
Jul 15 23:14:42.661890 kubelet[3284]: I0715 23:14:42.661813 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6368961-c39a-471f-af3d-a5f889aeb370-whisker-ca-bundle\") pod \"whisker-998b685db-jz8bw\" (UID: \"f6368961-c39a-471f-af3d-a5f889aeb370\") " pod="calico-system/whisker-998b685db-jz8bw"
Jul 15 23:14:42.663770 kubelet[3284]: I0715 23:14:42.663671 3284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wpl8\" (UniqueName: \"kubernetes.io/projected/f6368961-c39a-471f-af3d-a5f889aeb370-kube-api-access-9wpl8\") pod \"whisker-998b685db-jz8bw\" (UID: \"f6368961-c39a-471f-af3d-a5f889aeb370\") " pod="calico-system/whisker-998b685db-jz8bw"
Jul 15 23:14:42.770080 containerd[1995]: time="2025-07-15T23:14:42.769869124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2\" id:\"90bc9b8d8b789f8223b5b4f831a4ecfa6f8f20703aa23bd74968c8be3253781e\" pid:4512 exit_status:1 exited_at:{seconds:1752621282 nanos:769190116}"
Jul 15 23:14:42.952468 containerd[1995]: time="2025-07-15T23:14:42.952265069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-998b685db-jz8bw,Uid:f6368961-c39a-471f-af3d-a5f889aeb370,Namespace:calico-system,Attempt:0,}"
Jul 15 23:14:43.266874 (udev-worker)[4451]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 23:14:43.269190 systemd-networkd[1867]: calid0e6abb05b8: Link UP
Jul 15 23:14:43.272632 systemd-networkd[1867]: calid0e6abb05b8: Gained carrier
Jul 15 23:14:43.319682 containerd[1995]: 2025-07-15 23:14:42.998 [INFO][4525] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 15 23:14:43.319682 containerd[1995]: 2025-07-15 23:14:43.087 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0 whisker-998b685db- calico-system f6368961-c39a-471f-af3d-a5f889aeb370 904 0 2025-07-15 23:14:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:998b685db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-19-30 whisker-998b685db-jz8bw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid0e6abb05b8 [] [] }} ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Namespace="calico-system" Pod="whisker-998b685db-jz8bw" WorkloadEndpoint="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-"
Jul 15 23:14:43.319682 containerd[1995]: 2025-07-15 23:14:43.087 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Namespace="calico-system" Pod="whisker-998b685db-jz8bw" WorkloadEndpoint="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0"
Jul 15 23:14:43.319682 containerd[1995]: 2025-07-15 23:14:43.174 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" HandleID="k8s-pod-network.cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Workload="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0"
Jul 15 23:14:43.320060 containerd[1995]: 2025-07-15 23:14:43.175 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" HandleID="k8s-pod-network.cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Workload="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004de20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-30", "pod":"whisker-998b685db-jz8bw", "timestamp":"2025-07-15 23:14:43.174749366 +0000 UTC"}, Hostname:"ip-172-31-19-30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 15 23:14:43.320060 containerd[1995]: 2025-07-15 23:14:43.175 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:14:43.320060 containerd[1995]: 2025-07-15 23:14:43.175 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:14:43.320060 containerd[1995]: 2025-07-15 23:14:43.175 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-30'
Jul 15 23:14:43.320060 containerd[1995]: 2025-07-15 23:14:43.191 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" host="ip-172-31-19-30"
Jul 15 23:14:43.320060 containerd[1995]: 2025-07-15 23:14:43.200 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-30"
Jul 15 23:14:43.320060 containerd[1995]: 2025-07-15 23:14:43.207 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ip-172-31-19-30"
Jul 15 23:14:43.320060 containerd[1995]: 2025-07-15 23:14:43.210 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-19-30"
Jul 15 23:14:43.320060 containerd[1995]: 2025-07-15 23:14:43.214 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-19-30"
Jul 15 23:14:43.320544 containerd[1995]: 2025-07-15 23:14:43.214 [INFO][4537] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" host="ip-172-31-19-30"
Jul 15 23:14:43.320544 containerd[1995]: 2025-07-15 23:14:43.217 [INFO][4537] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c
Jul 15 23:14:43.320544 containerd[1995]: 2025-07-15 23:14:43.223 [INFO][4537] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" host="ip-172-31-19-30"
Jul 15 23:14:43.320544 containerd[1995]: 2025-07-15 23:14:43.232 [INFO][4537] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.193/26] block=192.168.109.192/26 handle="k8s-pod-network.cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" host="ip-172-31-19-30"
Jul 15 23:14:43.320544 containerd[1995]: 2025-07-15 23:14:43.232 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.193/26] handle="k8s-pod-network.cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" host="ip-172-31-19-30"
Jul 15 23:14:43.320544 containerd[1995]: 2025-07-15 23:14:43.232 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:14:43.320544 containerd[1995]: 2025-07-15 23:14:43.232 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.193/26] IPv6=[] ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" HandleID="k8s-pod-network.cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Workload="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0"
Jul 15 23:14:43.323840 containerd[1995]: 2025-07-15 23:14:43.244 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Namespace="calico-system" Pod="whisker-998b685db-jz8bw" WorkloadEndpoint="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0", GenerateName:"whisker-998b685db-", Namespace:"calico-system", SelfLink:"", UID:"f6368961-c39a-471f-af3d-a5f889aeb370", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"998b685db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"", Pod:"whisker-998b685db-jz8bw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid0e6abb05b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:14:43.323840 containerd[1995]: 2025-07-15 23:14:43.244 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.193/32] ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Namespace="calico-system" Pod="whisker-998b685db-jz8bw" WorkloadEndpoint="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0"
Jul 15 23:14:43.324124 containerd[1995]: 2025-07-15 23:14:43.245 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0e6abb05b8 ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Namespace="calico-system" Pod="whisker-998b685db-jz8bw" WorkloadEndpoint="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0"
Jul 15 23:14:43.324124 containerd[1995]: 2025-07-15 23:14:43.271 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Namespace="calico-system" Pod="whisker-998b685db-jz8bw" WorkloadEndpoint="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0"
Jul 15 23:14:43.324436 containerd[1995]: 2025-07-15 23:14:43.274 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Namespace="calico-system" Pod="whisker-998b685db-jz8bw" WorkloadEndpoint="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0", GenerateName:"whisker-998b685db-", Namespace:"calico-system", SelfLink:"", UID:"f6368961-c39a-471f-af3d-a5f889aeb370", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"998b685db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c", Pod:"whisker-998b685db-jz8bw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid0e6abb05b8", MAC:"da:4f:7e:a9:b9:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:14:43.324601 containerd[1995]: 2025-07-15 23:14:43.314 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" Namespace="calico-system" Pod="whisker-998b685db-jz8bw" WorkloadEndpoint="ip--172--31--19--30-k8s-whisker--998b685db--jz8bw-eth0"
Jul 15 23:14:43.392861 containerd[1995]: time="2025-07-15T23:14:43.392783643Z" level=info msg="connecting to shim cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c" address="unix:///run/containerd/s/ffc036774f938398f16a62aa546cba1f8019e77107994b0da334dd46f5fe7d6c" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:14:43.482013 systemd[1]: Started cri-containerd-cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c.scope - libcontainer container cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c.
Jul 15 23:14:43.650785 containerd[1995]: time="2025-07-15T23:14:43.650662565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-998b685db-jz8bw,Uid:f6368961-c39a-471f-af3d-a5f889aeb370,Namespace:calico-system,Attempt:0,} returns sandbox id \"cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c\""
Jul 15 23:14:43.657704 containerd[1995]: time="2025-07-15T23:14:43.657305537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 15 23:14:44.180182 kubelet[3284]: I0715 23:14:44.180115 3284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7145806-3918-48e2-8866-6debfb04a984" path="/var/lib/kubelet/pods/d7145806-3918-48e2-8866-6debfb04a984/volumes"
Jul 15 23:14:44.644862 systemd-networkd[1867]: calid0e6abb05b8: Gained IPv6LL
Jul 15 23:14:44.671294 systemd-networkd[1867]: vxlan.calico: Link UP
Jul 15 23:14:44.671315 systemd-networkd[1867]: vxlan.calico: Gained carrier
Jul 15 23:14:44.723034 (udev-worker)[4450]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 23:14:45.068617 containerd[1995]: time="2025-07-15T23:14:45.068150104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:45.071812 containerd[1995]: time="2025-07-15T23:14:45.071719048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614"
Jul 15 23:14:45.074458 containerd[1995]: time="2025-07-15T23:14:45.074159332Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:45.079612 containerd[1995]: time="2025-07-15T23:14:45.079083184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:45.085098 containerd[1995]: time="2025-07-15T23:14:45.084962428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.426261651s"
Jul 15 23:14:45.085098 containerd[1995]: time="2025-07-15T23:14:45.085047772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\""
Jul 15 23:14:45.093932 containerd[1995]: time="2025-07-15T23:14:45.093857476Z" level=info msg="CreateContainer within sandbox \"cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 15 23:14:45.122614 containerd[1995]: time="2025-07-15T23:14:45.122239816Z" level=info msg="Container b13ff339b99aea937be6c28423578931d694631128761fabeb0a76f9660f46db: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:14:45.149304 containerd[1995]: time="2025-07-15T23:14:45.148360960Z" level=info msg="CreateContainer within sandbox \"cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b13ff339b99aea937be6c28423578931d694631128761fabeb0a76f9660f46db\""
Jul 15 23:14:45.151849 containerd[1995]: time="2025-07-15T23:14:45.151785100Z" level=info msg="StartContainer for \"b13ff339b99aea937be6c28423578931d694631128761fabeb0a76f9660f46db\""
Jul 15 23:14:45.157430 containerd[1995]: time="2025-07-15T23:14:45.157360660Z" level=info msg="connecting to shim b13ff339b99aea937be6c28423578931d694631128761fabeb0a76f9660f46db" address="unix:///run/containerd/s/ffc036774f938398f16a62aa546cba1f8019e77107994b0da334dd46f5fe7d6c" protocol=ttrpc version=3
Jul 15 23:14:45.223240 systemd[1]: Started cri-containerd-b13ff339b99aea937be6c28423578931d694631128761fabeb0a76f9660f46db.scope - libcontainer container b13ff339b99aea937be6c28423578931d694631128761fabeb0a76f9660f46db.
Jul 15 23:14:45.328058 containerd[1995]: time="2025-07-15T23:14:45.327859001Z" level=info msg="StartContainer for \"b13ff339b99aea937be6c28423578931d694631128761fabeb0a76f9660f46db\" returns successfully"
Jul 15 23:14:45.333186 containerd[1995]: time="2025-07-15T23:14:45.333126437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 15 23:14:45.922770 systemd-networkd[1867]: vxlan.calico: Gained IPv6LL
Jul 15 23:14:46.176941 containerd[1995]: time="2025-07-15T23:14:46.176764133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c9578dc8-lrwvt,Uid:bf9bea21-8a82-469d-9185-c9d7f53ab6f9,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:14:46.413285 systemd-networkd[1867]: cali62da5198428: Link UP
Jul 15 23:14:46.418290 systemd-networkd[1867]: cali62da5198428: Gained carrier
Jul 15 23:14:46.462956 containerd[1995]: 2025-07-15 23:14:46.266 [INFO][4835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0 calico-apiserver-79c9578dc8- calico-apiserver bf9bea21-8a82-469d-9185-c9d7f53ab6f9 833 0 2025-07-15 23:14:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79c9578dc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-30 calico-apiserver-79c9578dc8-lrwvt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali62da5198428 [] [] }} ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-lrwvt" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-"
Jul 15 23:14:46.462956 containerd[1995]: 2025-07-15 23:14:46.266 [INFO][4835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-lrwvt" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0"
Jul 15 23:14:46.462956 containerd[1995]: 2025-07-15 23:14:46.321 [INFO][4847] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" HandleID="k8s-pod-network.cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Workload="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0"
Jul 15 23:14:46.463370 containerd[1995]: 2025-07-15 23:14:46.322 [INFO][4847] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" HandleID="k8s-pod-network.cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Workload="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-30", "pod":"calico-apiserver-79c9578dc8-lrwvt", "timestamp":"2025-07-15 23:14:46.321868206 +0000 UTC"}, Hostname:"ip-172-31-19-30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 15 23:14:46.463370 containerd[1995]: 2025-07-15 23:14:46.322 [INFO][4847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:14:46.463370 containerd[1995]: 2025-07-15 23:14:46.322 [INFO][4847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:14:46.463370 containerd[1995]: 2025-07-15 23:14:46.322 [INFO][4847] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-30'
Jul 15 23:14:46.463370 containerd[1995]: 2025-07-15 23:14:46.337 [INFO][4847] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" host="ip-172-31-19-30"
Jul 15 23:14:46.463370 containerd[1995]: 2025-07-15 23:14:46.345 [INFO][4847] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-30"
Jul 15 23:14:46.463370 containerd[1995]: 2025-07-15 23:14:46.355 [INFO][4847] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ip-172-31-19-30"
Jul 15 23:14:46.463370 containerd[1995]: 2025-07-15 23:14:46.359 [INFO][4847] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-19-30"
Jul 15 23:14:46.463370 containerd[1995]: 2025-07-15 23:14:46.363 [INFO][4847] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-19-30"
Jul 15 23:14:46.463891 containerd[1995]: 2025-07-15 23:14:46.363 [INFO][4847] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" host="ip-172-31-19-30"
Jul 15 23:14:46.463891 containerd[1995]: 2025-07-15 23:14:46.365 [INFO][4847] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28
Jul 15 23:14:46.463891 containerd[1995]: 2025-07-15 23:14:46.374 [INFO][4847] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" host="ip-172-31-19-30"
Jul 15 23:14:46.463891 containerd[1995]: 2025-07-15 23:14:46.391 [INFO][4847] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.194/26] block=192.168.109.192/26 handle="k8s-pod-network.cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" host="ip-172-31-19-30"
Jul 15 23:14:46.463891 containerd[1995]: 2025-07-15 23:14:46.391 [INFO][4847] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.194/26] handle="k8s-pod-network.cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" host="ip-172-31-19-30"
Jul 15 23:14:46.463891 containerd[1995]: 2025-07-15 23:14:46.391 [INFO][4847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:14:46.463891 containerd[1995]: 2025-07-15 23:14:46.392 [INFO][4847] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.194/26] IPv6=[] ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" HandleID="k8s-pod-network.cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Workload="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0"
Jul 15 23:14:46.464209 containerd[1995]: 2025-07-15 23:14:46.399 [INFO][4835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-lrwvt" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0", GenerateName:"calico-apiserver-79c9578dc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf9bea21-8a82-469d-9185-c9d7f53ab6f9", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c9578dc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"", Pod:"calico-apiserver-79c9578dc8-lrwvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62da5198428", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:14:46.464340 containerd[1995]: 2025-07-15 23:14:46.399 [INFO][4835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.194/32] ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-lrwvt" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0"
Jul 15 23:14:46.464340 containerd[1995]: 2025-07-15 23:14:46.400 [INFO][4835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62da5198428 ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-lrwvt" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0"
Jul 15 23:14:46.464340 containerd[1995]: 2025-07-15 23:14:46.423 [INFO][4835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-lrwvt" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0"
Jul 15 23:14:46.464529 containerd[1995]: 2025-07-15 23:14:46.432 [INFO][4835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-lrwvt" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0", GenerateName:"calico-apiserver-79c9578dc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf9bea21-8a82-469d-9185-c9d7f53ab6f9", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c9578dc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28", Pod:"calico-apiserver-79c9578dc8-lrwvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62da5198428", MAC:"ce:d9:4f:4b:83:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 15 23:14:46.467645 containerd[1995]: 2025-07-15 23:14:46.451 [INFO][4835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-lrwvt" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--lrwvt-eth0"
Jul 15 23:14:46.577658 containerd[1995]: time="2025-07-15T23:14:46.577543771Z" level=info msg="connecting to shim cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28" address="unix:///run/containerd/s/c740cf83b517cfdbd4a0819ebeaa179d31eab90a57083fdedf08ed7af36406c4" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:14:46.666042 systemd[1]: Started cri-containerd-cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28.scope - libcontainer container cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28.
Jul 15 23:14:46.995503 containerd[1995]: time="2025-07-15T23:14:46.995443125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c9578dc8-lrwvt,Uid:bf9bea21-8a82-469d-9185-c9d7f53ab6f9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28\""
Jul 15 23:14:47.792455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975813872.mount: Deactivated successfully.
Jul 15 23:14:47.865010 containerd[1995]: time="2025-07-15T23:14:47.863665354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:47.867282 containerd[1995]: time="2025-07-15T23:14:47.867227254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 23:14:47.869425 containerd[1995]: time="2025-07-15T23:14:47.869372650Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:47.880420 containerd[1995]: time="2025-07-15T23:14:47.880252654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:47.883459 containerd[1995]: time="2025-07-15T23:14:47.883388794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.550200245s" Jul 15 23:14:47.883459 containerd[1995]: time="2025-07-15T23:14:47.883453618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 23:14:47.891359 containerd[1995]: time="2025-07-15T23:14:47.891203254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:14:47.901134 containerd[1995]: time="2025-07-15T23:14:47.901042666Z" level=info msg="CreateContainer within sandbox 
\"cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 23:14:47.939035 containerd[1995]: time="2025-07-15T23:14:47.938949346Z" level=info msg="Container 1bc81b9497d8e1078bcaeeaf6a8475713ffb0eae99d472dc8f2dbce9a08e80b4: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:47.960067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020022071.mount: Deactivated successfully. Jul 15 23:14:47.999064 containerd[1995]: time="2025-07-15T23:14:47.998940154Z" level=info msg="CreateContainer within sandbox \"cef228d2c4f664d7dbdc77e292a788576e654b697745b114e8c18e95282d1a9c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1bc81b9497d8e1078bcaeeaf6a8475713ffb0eae99d472dc8f2dbce9a08e80b4\"" Jul 15 23:14:48.001635 containerd[1995]: time="2025-07-15T23:14:48.000613182Z" level=info msg="StartContainer for \"1bc81b9497d8e1078bcaeeaf6a8475713ffb0eae99d472dc8f2dbce9a08e80b4\"" Jul 15 23:14:48.004061 containerd[1995]: time="2025-07-15T23:14:48.004005438Z" level=info msg="connecting to shim 1bc81b9497d8e1078bcaeeaf6a8475713ffb0eae99d472dc8f2dbce9a08e80b4" address="unix:///run/containerd/s/ffc036774f938398f16a62aa546cba1f8019e77107994b0da334dd46f5fe7d6c" protocol=ttrpc version=3 Jul 15 23:14:48.069934 systemd[1]: Started cri-containerd-1bc81b9497d8e1078bcaeeaf6a8475713ffb0eae99d472dc8f2dbce9a08e80b4.scope - libcontainer container 1bc81b9497d8e1078bcaeeaf6a8475713ffb0eae99d472dc8f2dbce9a08e80b4. 
Jul 15 23:14:48.178589 containerd[1995]: time="2025-07-15T23:14:48.177869299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c9578dc8-48pmd,Uid:0ba49da8-b5ae-49f6-a962-a399f396556d,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:14:48.184591 containerd[1995]: time="2025-07-15T23:14:48.182702755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-dn8dz,Uid:5c6d9a04-36e2-4d83-83da-b77b4980d2aa,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:48.192594 containerd[1995]: time="2025-07-15T23:14:48.191848987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4h2dn,Uid:6ea3fe6c-3568-40fc-bb4d-6e2955010c16,Namespace:kube-system,Attempt:0,}" Jul 15 23:14:48.192594 containerd[1995]: time="2025-07-15T23:14:48.192090607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j48z9,Uid:fe57ee22-2543-410d-ab8c-77d8463dc034,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:48.195148 containerd[1995]: time="2025-07-15T23:14:48.195102307Z" level=info msg="StartContainer for \"1bc81b9497d8e1078bcaeeaf6a8475713ffb0eae99d472dc8f2dbce9a08e80b4\" returns successfully" Jul 15 23:14:48.418764 systemd-networkd[1867]: cali62da5198428: Gained IPv6LL Jul 15 23:14:48.621334 kubelet[3284]: I0715 23:14:48.621238 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-998b685db-jz8bw" podStartSLOduration=2.386938768 podStartE2EDuration="6.621209889s" podCreationTimestamp="2025-07-15 23:14:42 +0000 UTC" firstStartedPulling="2025-07-15 23:14:43.656709437 +0000 UTC m=+47.722431262" lastFinishedPulling="2025-07-15 23:14:47.890980558 +0000 UTC m=+51.956702383" observedRunningTime="2025-07-15 23:14:48.619152141 +0000 UTC m=+52.684874074" watchObservedRunningTime="2025-07-15 23:14:48.621209889 +0000 UTC m=+52.686931726" Jul 15 23:14:48.824767 systemd-networkd[1867]: cali9638e379b08: Link UP Jul 15 23:14:48.825177 systemd-networkd[1867]: 
cali9638e379b08: Gained carrier Jul 15 23:14:48.872656 containerd[1995]: 2025-07-15 23:14:48.446 [INFO][4973] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0 csi-node-driver- calico-system fe57ee22-2543-410d-ab8c-77d8463dc034 719 0 2025-07-15 23:14:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-30 csi-node-driver-j48z9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9638e379b08 [] [] }} ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Namespace="calico-system" Pod="csi-node-driver-j48z9" WorkloadEndpoint="ip--172--31--19--30-k8s-csi--node--driver--j48z9-" Jul 15 23:14:48.872656 containerd[1995]: 2025-07-15 23:14:48.449 [INFO][4973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Namespace="calico-system" Pod="csi-node-driver-j48z9" WorkloadEndpoint="ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0" Jul 15 23:14:48.872656 containerd[1995]: 2025-07-15 23:14:48.654 [INFO][5004] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" HandleID="k8s-pod-network.18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Workload="ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0" Jul 15 23:14:48.873398 containerd[1995]: 2025-07-15 23:14:48.656 [INFO][5004] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" 
HandleID="k8s-pod-network.18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Workload="ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000348140), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-30", "pod":"csi-node-driver-j48z9", "timestamp":"2025-07-15 23:14:48.654460822 +0000 UTC"}, Hostname:"ip-172-31-19-30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:48.873398 containerd[1995]: 2025-07-15 23:14:48.656 [INFO][5004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:48.873398 containerd[1995]: 2025-07-15 23:14:48.656 [INFO][5004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:14:48.873398 containerd[1995]: 2025-07-15 23:14:48.656 [INFO][5004] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-30' Jul 15 23:14:48.873398 containerd[1995]: 2025-07-15 23:14:48.711 [INFO][5004] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" host="ip-172-31-19-30" Jul 15 23:14:48.873398 containerd[1995]: 2025-07-15 23:14:48.725 [INFO][5004] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-30" Jul 15 23:14:48.873398 containerd[1995]: 2025-07-15 23:14:48.744 [INFO][5004] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:48.873398 containerd[1995]: 2025-07-15 23:14:48.749 [INFO][5004] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:48.873398 containerd[1995]: 2025-07-15 23:14:48.756 [INFO][5004] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 
23:14:48.875133 containerd[1995]: 2025-07-15 23:14:48.756 [INFO][5004] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" host="ip-172-31-19-30" Jul 15 23:14:48.875133 containerd[1995]: 2025-07-15 23:14:48.760 [INFO][5004] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b Jul 15 23:14:48.875133 containerd[1995]: 2025-07-15 23:14:48.773 [INFO][5004] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" host="ip-172-31-19-30" Jul 15 23:14:48.875133 containerd[1995]: 2025-07-15 23:14:48.800 [INFO][5004] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.195/26] block=192.168.109.192/26 handle="k8s-pod-network.18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" host="ip-172-31-19-30" Jul 15 23:14:48.875133 containerd[1995]: 2025-07-15 23:14:48.800 [INFO][5004] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.195/26] handle="k8s-pod-network.18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" host="ip-172-31-19-30" Jul 15 23:14:48.875133 containerd[1995]: 2025-07-15 23:14:48.801 [INFO][5004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:14:48.875133 containerd[1995]: 2025-07-15 23:14:48.801 [INFO][5004] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.195/26] IPv6=[] ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" HandleID="k8s-pod-network.18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Workload="ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0" Jul 15 23:14:48.875449 containerd[1995]: 2025-07-15 23:14:48.816 [INFO][4973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Namespace="calico-system" Pod="csi-node-driver-j48z9" WorkloadEndpoint="ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe57ee22-2543-410d-ab8c-77d8463dc034", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"", Pod:"csi-node-driver-j48z9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9638e379b08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:48.877372 containerd[1995]: 2025-07-15 23:14:48.816 [INFO][4973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.195/32] ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Namespace="calico-system" Pod="csi-node-driver-j48z9" WorkloadEndpoint="ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0" Jul 15 23:14:48.877372 containerd[1995]: 2025-07-15 23:14:48.816 [INFO][4973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9638e379b08 ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Namespace="calico-system" Pod="csi-node-driver-j48z9" WorkloadEndpoint="ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0" Jul 15 23:14:48.877372 containerd[1995]: 2025-07-15 23:14:48.824 [INFO][4973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Namespace="calico-system" Pod="csi-node-driver-j48z9" WorkloadEndpoint="ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0" Jul 15 23:14:48.877582 containerd[1995]: 2025-07-15 23:14:48.825 [INFO][4973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Namespace="calico-system" Pod="csi-node-driver-j48z9" WorkloadEndpoint="ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fe57ee22-2543-410d-ab8c-77d8463dc034", 
ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b", Pod:"csi-node-driver-j48z9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9638e379b08", MAC:"06:6e:79:0c:c3:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:48.877745 containerd[1995]: 2025-07-15 23:14:48.857 [INFO][4973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" Namespace="calico-system" Pod="csi-node-driver-j48z9" WorkloadEndpoint="ip--172--31--19--30-k8s-csi--node--driver--j48z9-eth0" Jul 15 23:14:49.000014 systemd-networkd[1867]: calia274ddf4bfa: Link UP Jul 15 23:14:49.005411 systemd-networkd[1867]: calia274ddf4bfa: Gained carrier Jul 15 23:14:49.008801 containerd[1995]: time="2025-07-15T23:14:49.008404747Z" level=info msg="connecting to shim 18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b" 
address="unix:///run/containerd/s/e6b41c41c50bed968cbe6c7b197e19ca22484bcc9ddd9c5f28896d4306558b25" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:49.073862 containerd[1995]: 2025-07-15 23:14:48.488 [INFO][4963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0 calico-apiserver-79c9578dc8- calico-apiserver 0ba49da8-b5ae-49f6-a962-a399f396556d 832 0 2025-07-15 23:14:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79c9578dc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-30 calico-apiserver-79c9578dc8-48pmd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia274ddf4bfa [] [] }} ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-48pmd" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-" Jul 15 23:14:49.073862 containerd[1995]: 2025-07-15 23:14:48.495 [INFO][4963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-48pmd" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0" Jul 15 23:14:49.073862 containerd[1995]: 2025-07-15 23:14:48.653 [INFO][5019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" HandleID="k8s-pod-network.ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Workload="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0" Jul 15 23:14:49.074305 containerd[1995]: 2025-07-15 23:14:48.657 
[INFO][5019] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" HandleID="k8s-pod-network.ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Workload="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027a430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-30", "pod":"calico-apiserver-79c9578dc8-48pmd", "timestamp":"2025-07-15 23:14:48.65328139 +0000 UTC"}, Hostname:"ip-172-31-19-30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:49.074305 containerd[1995]: 2025-07-15 23:14:48.659 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:49.074305 containerd[1995]: 2025-07-15 23:14:48.800 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:14:49.074305 containerd[1995]: 2025-07-15 23:14:48.801 [INFO][5019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-30' Jul 15 23:14:49.074305 containerd[1995]: 2025-07-15 23:14:48.852 [INFO][5019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" host="ip-172-31-19-30" Jul 15 23:14:49.074305 containerd[1995]: 2025-07-15 23:14:48.878 [INFO][5019] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-30" Jul 15 23:14:49.074305 containerd[1995]: 2025-07-15 23:14:48.908 [INFO][5019] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:49.074305 containerd[1995]: 2025-07-15 23:14:48.914 [INFO][5019] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:49.074305 containerd[1995]: 2025-07-15 23:14:48.920 [INFO][5019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:49.075291 containerd[1995]: 2025-07-15 23:14:48.921 [INFO][5019] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" host="ip-172-31-19-30" Jul 15 23:14:49.075291 containerd[1995]: 2025-07-15 23:14:48.926 [INFO][5019] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1 Jul 15 23:14:49.075291 containerd[1995]: 2025-07-15 23:14:48.936 [INFO][5019] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" host="ip-172-31-19-30" Jul 15 23:14:49.075291 containerd[1995]: 2025-07-15 23:14:48.966 [INFO][5019] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.196/26] block=192.168.109.192/26 
handle="k8s-pod-network.ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" host="ip-172-31-19-30" Jul 15 23:14:49.075291 containerd[1995]: 2025-07-15 23:14:48.967 [INFO][5019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.196/26] handle="k8s-pod-network.ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" host="ip-172-31-19-30" Jul 15 23:14:49.075291 containerd[1995]: 2025-07-15 23:14:48.967 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:14:49.075291 containerd[1995]: 2025-07-15 23:14:48.967 [INFO][5019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.196/26] IPv6=[] ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" HandleID="k8s-pod-network.ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Workload="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0" Jul 15 23:14:49.076119 containerd[1995]: 2025-07-15 23:14:48.978 [INFO][4963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-48pmd" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0", GenerateName:"calico-apiserver-79c9578dc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ba49da8-b5ae-49f6-a962-a399f396556d", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c9578dc8", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"", Pod:"calico-apiserver-79c9578dc8-48pmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia274ddf4bfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:49.076264 containerd[1995]: 2025-07-15 23:14:48.979 [INFO][4963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.196/32] ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-48pmd" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0" Jul 15 23:14:49.076264 containerd[1995]: 2025-07-15 23:14:48.979 [INFO][4963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia274ddf4bfa ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-48pmd" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0" Jul 15 23:14:49.076264 containerd[1995]: 2025-07-15 23:14:49.010 [INFO][4963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-48pmd" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0" Jul 15 
23:14:49.076841 containerd[1995]: 2025-07-15 23:14:49.011 [INFO][4963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-48pmd" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0", GenerateName:"calico-apiserver-79c9578dc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ba49da8-b5ae-49f6-a962-a399f396556d", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c9578dc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1", Pod:"calico-apiserver-79c9578dc8-48pmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia274ddf4bfa", MAC:"26:c3:5f:80:83:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 
23:14:49.076974 containerd[1995]: 2025-07-15 23:14:49.067 [INFO][4963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" Namespace="calico-apiserver" Pod="calico-apiserver-79c9578dc8-48pmd" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--apiserver--79c9578dc8--48pmd-eth0" Jul 15 23:14:49.112180 systemd[1]: Started cri-containerd-18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b.scope - libcontainer container 18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b. Jul 15 23:14:49.179584 containerd[1995]: time="2025-07-15T23:14:49.179436296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544bb4cb54-bn5qk,Uid:b34ce3a0-ef81-40df-83fb-eff1fce094a1,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:49.180935 containerd[1995]: time="2025-07-15T23:14:49.180824036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-brdvw,Uid:375a0692-ef8f-4068-a807-b03338e74674,Namespace:kube-system,Attempt:0,}" Jul 15 23:14:49.287823 containerd[1995]: time="2025-07-15T23:14:49.286229613Z" level=info msg="connecting to shim ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1" address="unix:///run/containerd/s/7e47b42da614a5911e1cef5d34867a7867f77eab319f33963d47d45847caa033" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:49.306238 systemd-networkd[1867]: cali418c7ec1c05: Link UP Jul 15 23:14:49.309134 systemd-networkd[1867]: cali418c7ec1c05: Gained carrier Jul 15 23:14:49.393004 containerd[1995]: 2025-07-15 23:14:48.453 [INFO][4948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0 goldmane-58fd7646b9- calico-system 5c6d9a04-36e2-4d83-83da-b77b4980d2aa 836 0 2025-07-15 23:14:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-19-30 goldmane-58fd7646b9-dn8dz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali418c7ec1c05 [] [] }} ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-dn8dz" WorkloadEndpoint="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-" Jul 15 23:14:49.393004 containerd[1995]: 2025-07-15 23:14:48.454 [INFO][4948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-dn8dz" WorkloadEndpoint="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0" Jul 15 23:14:49.393004 containerd[1995]: 2025-07-15 23:14:48.672 [INFO][5009] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" HandleID="k8s-pod-network.8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Workload="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0" Jul 15 23:14:49.393607 containerd[1995]: 2025-07-15 23:14:48.672 [INFO][5009] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" HandleID="k8s-pod-network.8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Workload="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000370150), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-30", "pod":"goldmane-58fd7646b9-dn8dz", "timestamp":"2025-07-15 23:14:48.672715834 +0000 UTC"}, Hostname:"ip-172-31-19-30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:49.393607 containerd[1995]: 2025-07-15 23:14:48.673 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:49.393607 containerd[1995]: 2025-07-15 23:14:48.967 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:14:49.393607 containerd[1995]: 2025-07-15 23:14:48.967 [INFO][5009] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-30' Jul 15 23:14:49.393607 containerd[1995]: 2025-07-15 23:14:49.017 [INFO][5009] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" host="ip-172-31-19-30" Jul 15 23:14:49.393607 containerd[1995]: 2025-07-15 23:14:49.043 [INFO][5009] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-30" Jul 15 23:14:49.393607 containerd[1995]: 2025-07-15 23:14:49.089 [INFO][5009] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:49.393607 containerd[1995]: 2025-07-15 23:14:49.109 [INFO][5009] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:49.393607 containerd[1995]: 2025-07-15 23:14:49.123 [INFO][5009] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:49.394386 containerd[1995]: 2025-07-15 23:14:49.124 [INFO][5009] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" host="ip-172-31-19-30" Jul 15 23:14:49.394386 containerd[1995]: 2025-07-15 23:14:49.130 [INFO][5009] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef Jul 15 23:14:49.394386 containerd[1995]: 2025-07-15 23:14:49.145 [INFO][5009] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" host="ip-172-31-19-30" Jul 15 23:14:49.394386 containerd[1995]: 2025-07-15 23:14:49.188 [INFO][5009] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.197/26] block=192.168.109.192/26 handle="k8s-pod-network.8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" host="ip-172-31-19-30" Jul 15 23:14:49.394386 containerd[1995]: 2025-07-15 23:14:49.191 [INFO][5009] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.197/26] handle="k8s-pod-network.8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" host="ip-172-31-19-30" Jul 15 23:14:49.394386 containerd[1995]: 2025-07-15 23:14:49.195 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:14:49.394386 containerd[1995]: 2025-07-15 23:14:49.199 [INFO][5009] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.197/26] IPv6=[] ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" HandleID="k8s-pod-network.8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Workload="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0" Jul 15 23:14:49.395303 containerd[1995]: 2025-07-15 23:14:49.231 [INFO][4948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-dn8dz" WorkloadEndpoint="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5c6d9a04-36e2-4d83-83da-b77b4980d2aa", ResourceVersion:"836", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"", Pod:"goldmane-58fd7646b9-dn8dz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali418c7ec1c05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:49.395303 containerd[1995]: 2025-07-15 23:14:49.238 [INFO][4948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.197/32] ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-dn8dz" WorkloadEndpoint="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0" Jul 15 23:14:49.396067 containerd[1995]: 2025-07-15 23:14:49.247 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali418c7ec1c05 ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-dn8dz" WorkloadEndpoint="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0" Jul 15 23:14:49.396067 containerd[1995]: 2025-07-15 23:14:49.318 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-dn8dz" WorkloadEndpoint="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0" Jul 15 23:14:49.396222 containerd[1995]: 2025-07-15 23:14:49.324 [INFO][4948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-dn8dz" WorkloadEndpoint="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5c6d9a04-36e2-4d83-83da-b77b4980d2aa", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef", Pod:"goldmane-58fd7646b9-dn8dz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali418c7ec1c05", MAC:"ce:85:34:14:5a:7c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:49.396363 containerd[1995]: 2025-07-15 23:14:49.361 [INFO][4948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-dn8dz" WorkloadEndpoint="ip--172--31--19--30-k8s-goldmane--58fd7646b9--dn8dz-eth0" Jul 15 23:14:49.517747 systemd-networkd[1867]: cali386cd9a76cc: Link UP Jul 15 23:14:49.521349 systemd-networkd[1867]: cali386cd9a76cc: Gained carrier Jul 15 23:14:49.522991 containerd[1995]: time="2025-07-15T23:14:49.521642746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j48z9,Uid:fe57ee22-2543-410d-ab8c-77d8463dc034,Namespace:calico-system,Attempt:0,} returns sandbox id \"18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b\"" Jul 15 23:14:49.594300 systemd[1]: Started cri-containerd-ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1.scope - libcontainer container ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1. 
Jul 15 23:14:49.602953 containerd[1995]: 2025-07-15 23:14:48.466 [INFO][4960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0 coredns-7c65d6cfc9- kube-system 6ea3fe6c-3568-40fc-bb4d-6e2955010c16 827 0 2025-07-15 23:14:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-30 coredns-7c65d6cfc9-4h2dn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali386cd9a76cc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4h2dn" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-" Jul 15 23:14:49.602953 containerd[1995]: 2025-07-15 23:14:48.468 [INFO][4960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4h2dn" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0" Jul 15 23:14:49.602953 containerd[1995]: 2025-07-15 23:14:48.702 [INFO][5014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" HandleID="k8s-pod-network.a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" Workload="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0" Jul 15 23:14:49.603374 containerd[1995]: 2025-07-15 23:14:48.703 [INFO][5014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" HandleID="k8s-pod-network.a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" 
Workload="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024be10), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-30", "pod":"coredns-7c65d6cfc9-4h2dn", "timestamp":"2025-07-15 23:14:48.70138537 +0000 UTC"}, Hostname:"ip-172-31-19-30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:49.603374 containerd[1995]: 2025-07-15 23:14:48.703 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:49.603374 containerd[1995]: 2025-07-15 23:14:49.197 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:14:49.603374 containerd[1995]: 2025-07-15 23:14:49.201 [INFO][5014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-30' Jul 15 23:14:49.603374 containerd[1995]: 2025-07-15 23:14:49.288 [INFO][5014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" host="ip-172-31-19-30" Jul 15 23:14:49.603374 containerd[1995]: 2025-07-15 23:14:49.321 [INFO][5014] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-30" Jul 15 23:14:49.603374 containerd[1995]: 2025-07-15 23:14:49.357 [INFO][5014] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:49.603374 containerd[1995]: 2025-07-15 23:14:49.367 [INFO][5014] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:49.603374 containerd[1995]: 2025-07-15 23:14:49.383 [INFO][5014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:49.603921 containerd[1995]: 2025-07-15 23:14:49.388 [INFO][5014] ipam/ipam.go 1220: 
Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" host="ip-172-31-19-30" Jul 15 23:14:49.603921 containerd[1995]: 2025-07-15 23:14:49.396 [INFO][5014] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34 Jul 15 23:14:49.603921 containerd[1995]: 2025-07-15 23:14:49.414 [INFO][5014] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" host="ip-172-31-19-30" Jul 15 23:14:49.603921 containerd[1995]: 2025-07-15 23:14:49.451 [INFO][5014] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.198/26] block=192.168.109.192/26 handle="k8s-pod-network.a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" host="ip-172-31-19-30" Jul 15 23:14:49.603921 containerd[1995]: 2025-07-15 23:14:49.453 [INFO][5014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.198/26] handle="k8s-pod-network.a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" host="ip-172-31-19-30" Jul 15 23:14:49.603921 containerd[1995]: 2025-07-15 23:14:49.453 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:14:49.603921 containerd[1995]: 2025-07-15 23:14:49.453 [INFO][5014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.198/26] IPv6=[] ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" HandleID="k8s-pod-network.a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" Workload="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0" Jul 15 23:14:49.604280 containerd[1995]: 2025-07-15 23:14:49.483 [INFO][4960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4h2dn" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ea3fe6c-3568-40fc-bb4d-6e2955010c16", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"", Pod:"coredns-7c65d6cfc9-4h2dn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali386cd9a76cc", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:49.604280 containerd[1995]: 2025-07-15 23:14:49.487 [INFO][4960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.198/32] ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4h2dn" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0" Jul 15 23:14:49.604280 containerd[1995]: 2025-07-15 23:14:49.487 [INFO][4960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali386cd9a76cc ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4h2dn" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0" Jul 15 23:14:49.604280 containerd[1995]: 2025-07-15 23:14:49.528 [INFO][4960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4h2dn" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0" Jul 15 23:14:49.604280 containerd[1995]: 2025-07-15 23:14:49.529 [INFO][4960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4h2dn" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ea3fe6c-3568-40fc-bb4d-6e2955010c16", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34", Pod:"coredns-7c65d6cfc9-4h2dn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali386cd9a76cc", MAC:"6e:54:1c:2d:4c:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:49.604280 containerd[1995]: 2025-07-15 23:14:49.575 [INFO][4960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4h2dn" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--4h2dn-eth0" Jul 15 23:14:49.652242 containerd[1995]: time="2025-07-15T23:14:49.652041286Z" level=info msg="connecting to shim 8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef" address="unix:///run/containerd/s/ecbbf65a0ae08cc8079335f3126745f226693af29da1b5fd94dd79913947ef99" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:49.816735 containerd[1995]: time="2025-07-15T23:14:49.814623503Z" level=info msg="connecting to shim a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34" address="unix:///run/containerd/s/a4440b8c567616298da6f9a708ba5816bdcb23d997b4d53512be6b97f4476259" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:49.890228 systemd[1]: Started cri-containerd-8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef.scope - libcontainer container 8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef. Jul 15 23:14:49.956196 systemd-networkd[1867]: cali9638e379b08: Gained IPv6LL Jul 15 23:14:50.105384 systemd[1]: Started cri-containerd-a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34.scope - libcontainer container a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34. 
Jul 15 23:14:50.154949 systemd-networkd[1867]: calid2ad852ccea: Link UP Jul 15 23:14:50.159367 systemd-networkd[1867]: calid2ad852ccea: Gained carrier Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:49.619 [INFO][5103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0 coredns-7c65d6cfc9- kube-system 375a0692-ef8f-4068-a807-b03338e74674 834 0 2025-07-15 23:14:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-30 coredns-7c65d6cfc9-brdvw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2ad852ccea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brdvw" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:49.621 [INFO][5103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brdvw" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:49.979 [INFO][5190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" HandleID="k8s-pod-network.6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Workload="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:49.979 [INFO][5190] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" HandleID="k8s-pod-network.6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Workload="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006192f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-30", "pod":"coredns-7c65d6cfc9-brdvw", "timestamp":"2025-07-15 23:14:49.977353788 +0000 UTC"}, Hostname:"ip-172-31-19-30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:49.979 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:49.980 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:49.981 [INFO][5190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-30' Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.001 [INFO][5190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" host="ip-172-31-19-30" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.019 [INFO][5190] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-30" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.034 [INFO][5190] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.039 [INFO][5190] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.048 [INFO][5190] ipam/ipam.go 235: Affinity is confirmed and 
block has been loaded cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.048 [INFO][5190] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" host="ip-172-31-19-30" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.060 [INFO][5190] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.091 [INFO][5190] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" host="ip-172-31-19-30" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.125 [INFO][5190] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.199/26] block=192.168.109.192/26 handle="k8s-pod-network.6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" host="ip-172-31-19-30" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.126 [INFO][5190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.199/26] handle="k8s-pod-network.6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" host="ip-172-31-19-30" Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.126 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:14:50.255366 containerd[1995]: 2025-07-15 23:14:50.126 [INFO][5190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.199/26] IPv6=[] ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" HandleID="k8s-pod-network.6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Workload="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0" Jul 15 23:14:50.263002 containerd[1995]: 2025-07-15 23:14:50.137 [INFO][5103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brdvw" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"375a0692-ef8f-4068-a807-b03338e74674", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"", Pod:"coredns-7c65d6cfc9-brdvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2ad852ccea", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:50.263002 containerd[1995]: 2025-07-15 23:14:50.139 [INFO][5103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.199/32] ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brdvw" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0" Jul 15 23:14:50.263002 containerd[1995]: 2025-07-15 23:14:50.139 [INFO][5103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2ad852ccea ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brdvw" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0" Jul 15 23:14:50.263002 containerd[1995]: 2025-07-15 23:14:50.164 [INFO][5103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brdvw" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0" Jul 15 23:14:50.263002 containerd[1995]: 2025-07-15 23:14:50.167 [INFO][5103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brdvw" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"375a0692-ef8f-4068-a807-b03338e74674", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae", Pod:"coredns-7c65d6cfc9-brdvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2ad852ccea", MAC:"6a:c5:80:dd:80:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:50.263002 containerd[1995]: 2025-07-15 23:14:50.213 [INFO][5103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" Namespace="kube-system" Pod="coredns-7c65d6cfc9-brdvw" WorkloadEndpoint="ip--172--31--19--30-k8s-coredns--7c65d6cfc9--brdvw-eth0" Jul 15 23:14:50.263002 containerd[1995]: time="2025-07-15T23:14:50.255754329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c9578dc8-48pmd,Uid:0ba49da8-b5ae-49f6-a962-a399f396556d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1\"" Jul 15 23:14:50.338971 systemd-networkd[1867]: calia274ddf4bfa: Gained IPv6LL Jul 15 23:14:50.400715 containerd[1995]: time="2025-07-15T23:14:50.398701606Z" level=info msg="connecting to shim 6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae" address="unix:///run/containerd/s/78ace4790acfa202319d6ba38e6cda24a4cebb73ebb4e3a68ad0e717cce5edf0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:50.455645 systemd-networkd[1867]: califb7e7e8ccc4: Link UP Jul 15 23:14:50.462523 systemd-networkd[1867]: califb7e7e8ccc4: Gained carrier Jul 15 23:14:50.535346 containerd[1995]: time="2025-07-15T23:14:50.532730315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4h2dn,Uid:6ea3fe6c-3568-40fc-bb4d-6e2955010c16,Namespace:kube-system,Attempt:0,} returns sandbox id \"a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34\"" Jul 15 23:14:50.550721 containerd[1995]: time="2025-07-15T23:14:50.544996211Z" level=info msg="CreateContainer within sandbox \"a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:49.677 [INFO][5093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0 calico-kube-controllers-544bb4cb54- calico-system 
b34ce3a0-ef81-40df-83fb-eff1fce094a1 835 0 2025-07-15 23:14:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:544bb4cb54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-30 calico-kube-controllers-544bb4cb54-bn5qk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califb7e7e8ccc4 [] [] }} ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Namespace="calico-system" Pod="calico-kube-controllers-544bb4cb54-bn5qk" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:49.678 [INFO][5093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Namespace="calico-system" Pod="calico-kube-controllers-544bb4cb54-bn5qk" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.025 [INFO][5220] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" HandleID="k8s-pod-network.85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Workload="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.025 [INFO][5220] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" HandleID="k8s-pod-network.85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Workload="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400034ece0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-30", "pod":"calico-kube-controllers-544bb4cb54-bn5qk", "timestamp":"2025-07-15 23:14:50.02500316 +0000 UTC"}, Hostname:"ip-172-31-19-30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.025 [INFO][5220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.126 [INFO][5220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.127 [INFO][5220] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-30' Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.173 [INFO][5220] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" host="ip-172-31-19-30" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.217 [INFO][5220] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-30" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.260 [INFO][5220] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.275 [INFO][5220] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.290 [INFO][5220] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-19-30" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.290 [INFO][5220] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 
handle="k8s-pod-network.85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" host="ip-172-31-19-30" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.297 [INFO][5220] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80 Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.318 [INFO][5220] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" host="ip-172-31-19-30" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.377 [INFO][5220] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.200/26] block=192.168.109.192/26 handle="k8s-pod-network.85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" host="ip-172-31-19-30" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.377 [INFO][5220] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.200/26] handle="k8s-pod-network.85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" host="ip-172-31-19-30" Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.379 [INFO][5220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:14:50.578991 containerd[1995]: 2025-07-15 23:14:50.384 [INFO][5220] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.200/26] IPv6=[] ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" HandleID="k8s-pod-network.85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Workload="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0" Jul 15 23:14:50.586727 containerd[1995]: 2025-07-15 23:14:50.434 [INFO][5093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Namespace="calico-system" Pod="calico-kube-controllers-544bb4cb54-bn5qk" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0", GenerateName:"calico-kube-controllers-544bb4cb54-", Namespace:"calico-system", SelfLink:"", UID:"b34ce3a0-ef81-40df-83fb-eff1fce094a1", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"544bb4cb54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"", Pod:"calico-kube-controllers-544bb4cb54-bn5qk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.109.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb7e7e8ccc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:50.586727 containerd[1995]: 2025-07-15 23:14:50.435 [INFO][5093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.200/32] ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Namespace="calico-system" Pod="calico-kube-controllers-544bb4cb54-bn5qk" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0" Jul 15 23:14:50.586727 containerd[1995]: 2025-07-15 23:14:50.435 [INFO][5093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb7e7e8ccc4 ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Namespace="calico-system" Pod="calico-kube-controllers-544bb4cb54-bn5qk" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0" Jul 15 23:14:50.586727 containerd[1995]: 2025-07-15 23:14:50.487 [INFO][5093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Namespace="calico-system" Pod="calico-kube-controllers-544bb4cb54-bn5qk" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0" Jul 15 23:14:50.586727 containerd[1995]: 2025-07-15 23:14:50.500 [INFO][5093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Namespace="calico-system" Pod="calico-kube-controllers-544bb4cb54-bn5qk" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0", GenerateName:"calico-kube-controllers-544bb4cb54-", Namespace:"calico-system", SelfLink:"", UID:"b34ce3a0-ef81-40df-83fb-eff1fce094a1", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"544bb4cb54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-30", ContainerID:"85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80", Pod:"calico-kube-controllers-544bb4cb54-bn5qk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb7e7e8ccc4", MAC:"2e:c2:26:4e:b3:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:50.586727 containerd[1995]: 2025-07-15 23:14:50.528 [INFO][5093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" Namespace="calico-system" Pod="calico-kube-controllers-544bb4cb54-bn5qk" WorkloadEndpoint="ip--172--31--19--30-k8s-calico--kube--controllers--544bb4cb54--bn5qk-eth0" Jul 
15 23:14:50.582912 systemd[1]: Started cri-containerd-6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae.scope - libcontainer container 6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae. Jul 15 23:14:50.639817 containerd[1995]: time="2025-07-15T23:14:50.638752151Z" level=info msg="Container f37ae6370c4c92b75f49bba329a9470fbbf5de4ad5d93d70c640c6046529fd86: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:50.666253 containerd[1995]: time="2025-07-15T23:14:50.666170724Z" level=info msg="CreateContainer within sandbox \"a399104639826473433382a3f7d81f65e9337398559979958d97641be881fa34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f37ae6370c4c92b75f49bba329a9470fbbf5de4ad5d93d70c640c6046529fd86\"" Jul 15 23:14:50.668945 containerd[1995]: time="2025-07-15T23:14:50.668874600Z" level=info msg="StartContainer for \"f37ae6370c4c92b75f49bba329a9470fbbf5de4ad5d93d70c640c6046529fd86\"" Jul 15 23:14:50.691475 containerd[1995]: time="2025-07-15T23:14:50.691385124Z" level=info msg="connecting to shim f37ae6370c4c92b75f49bba329a9470fbbf5de4ad5d93d70c640c6046529fd86" address="unix:///run/containerd/s/a4440b8c567616298da6f9a708ba5816bdcb23d997b4d53512be6b97f4476259" protocol=ttrpc version=3 Jul 15 23:14:50.695651 containerd[1995]: time="2025-07-15T23:14:50.695595300Z" level=info msg="connecting to shim 85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80" address="unix:///run/containerd/s/1f6375cfca4eebd0bfda41f9fbb54b111f9d010514dd1892593c58f27e8e465a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:50.788390 systemd-networkd[1867]: cali386cd9a76cc: Gained IPv6LL Jul 15 23:14:50.828933 systemd[1]: Started cri-containerd-85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80.scope - libcontainer container 85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80. 
Jul 15 23:14:50.838271 systemd[1]: Started cri-containerd-f37ae6370c4c92b75f49bba329a9470fbbf5de4ad5d93d70c640c6046529fd86.scope - libcontainer container f37ae6370c4c92b75f49bba329a9470fbbf5de4ad5d93d70c640c6046529fd86. Jul 15 23:14:50.939321 containerd[1995]: time="2025-07-15T23:14:50.939164713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-brdvw,Uid:375a0692-ef8f-4068-a807-b03338e74674,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae\"" Jul 15 23:14:50.950027 containerd[1995]: time="2025-07-15T23:14:50.949939405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-dn8dz,Uid:5c6d9a04-36e2-4d83-83da-b77b4980d2aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef\"" Jul 15 23:14:50.956637 containerd[1995]: time="2025-07-15T23:14:50.955604665Z" level=info msg="CreateContainer within sandbox \"6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:14:51.031037 containerd[1995]: time="2025-07-15T23:14:51.030968289Z" level=info msg="Container 59e2aa704c5417dbb1b92af47fd58f36bf04bcedc4cd4a5edb43ce64a4a1af24: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:51.031832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3713767630.mount: Deactivated successfully. Jul 15 23:14:51.043547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150194194.mount: Deactivated successfully. 
Jul 15 23:14:51.083123 containerd[1995]: time="2025-07-15T23:14:51.083051626Z" level=info msg="StartContainer for \"f37ae6370c4c92b75f49bba329a9470fbbf5de4ad5d93d70c640c6046529fd86\" returns successfully" Jul 15 23:14:51.084629 containerd[1995]: time="2025-07-15T23:14:51.084431866Z" level=info msg="CreateContainer within sandbox \"6a901ee521651e8f7c1a8de5369b7f47abdd94830faa2eef15161e89c48f78ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59e2aa704c5417dbb1b92af47fd58f36bf04bcedc4cd4a5edb43ce64a4a1af24\"" Jul 15 23:14:51.086437 containerd[1995]: time="2025-07-15T23:14:51.086363482Z" level=info msg="StartContainer for \"59e2aa704c5417dbb1b92af47fd58f36bf04bcedc4cd4a5edb43ce64a4a1af24\"" Jul 15 23:14:51.093282 containerd[1995]: time="2025-07-15T23:14:51.092967478Z" level=info msg="connecting to shim 59e2aa704c5417dbb1b92af47fd58f36bf04bcedc4cd4a5edb43ce64a4a1af24" address="unix:///run/containerd/s/78ace4790acfa202319d6ba38e6cda24a4cebb73ebb4e3a68ad0e717cce5edf0" protocol=ttrpc version=3 Jul 15 23:14:51.209382 systemd[1]: Started cri-containerd-59e2aa704c5417dbb1b92af47fd58f36bf04bcedc4cd4a5edb43ce64a4a1af24.scope - libcontainer container 59e2aa704c5417dbb1b92af47fd58f36bf04bcedc4cd4a5edb43ce64a4a1af24. 
Jul 15 23:14:51.299238 systemd-networkd[1867]: cali418c7ec1c05: Gained IPv6LL Jul 15 23:14:51.424037 containerd[1995]: time="2025-07-15T23:14:51.423721907Z" level=info msg="StartContainer for \"59e2aa704c5417dbb1b92af47fd58f36bf04bcedc4cd4a5edb43ce64a4a1af24\" returns successfully" Jul 15 23:14:51.524634 containerd[1995]: time="2025-07-15T23:14:51.524446896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544bb4cb54-bn5qk,Uid:b34ce3a0-ef81-40df-83fb-eff1fce094a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80\"" Jul 15 23:14:51.683050 systemd-networkd[1867]: calid2ad852ccea: Gained IPv6LL Jul 15 23:14:51.778504 kubelet[3284]: I0715 23:14:51.777806 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-brdvw" podStartSLOduration=51.777781873 podStartE2EDuration="51.777781873s" podCreationTimestamp="2025-07-15 23:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:14:51.718195285 +0000 UTC m=+55.783917170" watchObservedRunningTime="2025-07-15 23:14:51.777781873 +0000 UTC m=+55.843503710" Jul 15 23:14:51.781918 kubelet[3284]: I0715 23:14:51.780490 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4h2dn" podStartSLOduration=51.780468025 podStartE2EDuration="51.780468025s" podCreationTimestamp="2025-07-15 23:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:14:51.779828569 +0000 UTC m=+55.845550394" watchObservedRunningTime="2025-07-15 23:14:51.780468025 +0000 UTC m=+55.846189862" Jul 15 23:14:52.067467 systemd-networkd[1867]: califb7e7e8ccc4: Gained IPv6LL Jul 15 23:14:53.175506 containerd[1995]: time="2025-07-15T23:14:53.175426152Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:53.177427 containerd[1995]: time="2025-07-15T23:14:53.177304500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 15 23:14:53.181378 containerd[1995]: time="2025-07-15T23:14:53.181310988Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:53.188261 containerd[1995]: time="2025-07-15T23:14:53.188094540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:53.190812 containerd[1995]: time="2025-07-15T23:14:53.190739064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 5.298167954s" Jul 15 23:14:53.191004 containerd[1995]: time="2025-07-15T23:14:53.190970976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 23:14:53.193404 containerd[1995]: time="2025-07-15T23:14:53.193339176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 23:14:53.199231 containerd[1995]: time="2025-07-15T23:14:53.199175616Z" level=info msg="CreateContainer within sandbox \"cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 
23:14:53.226708 containerd[1995]: time="2025-07-15T23:14:53.226624752Z" level=info msg="Container f685ef8e05f1082c9bbd25435597896779a38a876204e12c187d6ce445872d1c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:53.252245 containerd[1995]: time="2025-07-15T23:14:53.252046776Z" level=info msg="CreateContainer within sandbox \"cc37ecdf25e9bec8deabd2ccf868c6733ba26cb6ccd647f2369f046b86e16a28\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f685ef8e05f1082c9bbd25435597896779a38a876204e12c187d6ce445872d1c\"" Jul 15 23:14:53.255509 containerd[1995]: time="2025-07-15T23:14:53.255428400Z" level=info msg="StartContainer for \"f685ef8e05f1082c9bbd25435597896779a38a876204e12c187d6ce445872d1c\"" Jul 15 23:14:53.258205 containerd[1995]: time="2025-07-15T23:14:53.258106608Z" level=info msg="connecting to shim f685ef8e05f1082c9bbd25435597896779a38a876204e12c187d6ce445872d1c" address="unix:///run/containerd/s/c740cf83b517cfdbd4a0819ebeaa179d31eab90a57083fdedf08ed7af36406c4" protocol=ttrpc version=3 Jul 15 23:14:53.304869 systemd[1]: Started cri-containerd-f685ef8e05f1082c9bbd25435597896779a38a876204e12c187d6ce445872d1c.scope - libcontainer container f685ef8e05f1082c9bbd25435597896779a38a876204e12c187d6ce445872d1c. 
Jul 15 23:14:53.398782 containerd[1995]: time="2025-07-15T23:14:53.398642461Z" level=info msg="StartContainer for \"f685ef8e05f1082c9bbd25435597896779a38a876204e12c187d6ce445872d1c\" returns successfully" Jul 15 23:14:53.715887 kubelet[3284]: I0715 23:14:53.715791 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79c9578dc8-lrwvt" podStartSLOduration=34.5218658 podStartE2EDuration="40.715743063s" podCreationTimestamp="2025-07-15 23:14:13 +0000 UTC" firstStartedPulling="2025-07-15 23:14:46.999202545 +0000 UTC m=+51.064924370" lastFinishedPulling="2025-07-15 23:14:53.193079796 +0000 UTC m=+57.258801633" observedRunningTime="2025-07-15 23:14:53.713929803 +0000 UTC m=+57.779651676" watchObservedRunningTime="2025-07-15 23:14:53.715743063 +0000 UTC m=+57.781464900" Jul 15 23:14:54.626468 containerd[1995]: time="2025-07-15T23:14:54.626396115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:54.629093 containerd[1995]: time="2025-07-15T23:14:54.629033607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 15 23:14:54.631572 containerd[1995]: time="2025-07-15T23:14:54.631164915Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:54.637255 containerd[1995]: time="2025-07-15T23:14:54.637188651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:54.640006 containerd[1995]: time="2025-07-15T23:14:54.639937059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id 
\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.446534403s" Jul 15 23:14:54.640006 containerd[1995]: time="2025-07-15T23:14:54.640002783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 15 23:14:54.642686 containerd[1995]: time="2025-07-15T23:14:54.641832063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:14:54.646894 containerd[1995]: time="2025-07-15T23:14:54.646714335Z" level=info msg="CreateContainer within sandbox \"18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 23:14:54.678592 containerd[1995]: time="2025-07-15T23:14:54.676885491Z" level=info msg="Container e41450217c64f114cea408bd4425b50142d99403a396acb5e7b17e22ef49bd25: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:54.696583 kubelet[3284]: I0715 23:14:54.694377 3284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:14:54.696917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078654121.mount: Deactivated successfully. 
Jul 15 23:14:54.705486 containerd[1995]: time="2025-07-15T23:14:54.705396796Z" level=info msg="CreateContainer within sandbox \"18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e41450217c64f114cea408bd4425b50142d99403a396acb5e7b17e22ef49bd25\"" Jul 15 23:14:54.707349 containerd[1995]: time="2025-07-15T23:14:54.707167504Z" level=info msg="StartContainer for \"e41450217c64f114cea408bd4425b50142d99403a396acb5e7b17e22ef49bd25\"" Jul 15 23:14:54.715608 containerd[1995]: time="2025-07-15T23:14:54.715273456Z" level=info msg="connecting to shim e41450217c64f114cea408bd4425b50142d99403a396acb5e7b17e22ef49bd25" address="unix:///run/containerd/s/e6b41c41c50bed968cbe6c7b197e19ca22484bcc9ddd9c5f28896d4306558b25" protocol=ttrpc version=3 Jul 15 23:14:54.766996 systemd[1]: Started cri-containerd-e41450217c64f114cea408bd4425b50142d99403a396acb5e7b17e22ef49bd25.scope - libcontainer container e41450217c64f114cea408bd4425b50142d99403a396acb5e7b17e22ef49bd25. 
Jul 15 23:14:54.882206 ntpd[1969]: Listen normally on 8 vxlan.calico 192.168.109.192:123 Jul 15 23:14:54.883195 ntpd[1969]: 15 Jul 23:14:54 ntpd[1969]: Listen normally on 8 vxlan.calico 192.168.109.192:123 Jul 15 23:14:54.883195 ntpd[1969]: 15 Jul 23:14:54 ntpd[1969]: Listen normally on 9 calid0e6abb05b8 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 23:14:54.883195 ntpd[1969]: 15 Jul 23:14:54 ntpd[1969]: Listen normally on 10 vxlan.calico [fe80::647e:c9ff:fe08:14db%5]:123 Jul 15 23:14:54.883195 ntpd[1969]: 15 Jul 23:14:54 ntpd[1969]: Listen normally on 11 cali62da5198428 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 23:14:54.882335 ntpd[1969]: Listen normally on 9 calid0e6abb05b8 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 23:14:54.885324 containerd[1995]: time="2025-07-15T23:14:54.884430184Z" level=info msg="StartContainer for \"e41450217c64f114cea408bd4425b50142d99403a396acb5e7b17e22ef49bd25\" returns successfully" Jul 15 23:14:54.885403 ntpd[1969]: 15 Jul 23:14:54 ntpd[1969]: Listen normally on 12 cali9638e379b08 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 23:14:54.885403 ntpd[1969]: 15 Jul 23:14:54 ntpd[1969]: Listen normally on 13 calia274ddf4bfa [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 23:14:54.885403 ntpd[1969]: 15 Jul 23:14:54 ntpd[1969]: Listen normally on 14 cali418c7ec1c05 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 23:14:54.885403 ntpd[1969]: 15 Jul 23:14:54 ntpd[1969]: Listen normally on 15 cali386cd9a76cc [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 23:14:54.885403 ntpd[1969]: 15 Jul 23:14:54 ntpd[1969]: Listen normally on 16 calid2ad852ccea [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 23:14:54.885403 ntpd[1969]: 15 Jul 23:14:54 ntpd[1969]: Listen normally on 17 califb7e7e8ccc4 [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 23:14:54.882416 ntpd[1969]: Listen normally on 10 vxlan.calico [fe80::647e:c9ff:fe08:14db%5]:123 Jul 15 23:14:54.882484 ntpd[1969]: Listen normally on 11 cali62da5198428 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 23:14:54.884456 ntpd[1969]: Listen normally on 
12 cali9638e379b08 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 23:14:54.884655 ntpd[1969]: Listen normally on 13 calia274ddf4bfa [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 23:14:54.884732 ntpd[1969]: Listen normally on 14 cali418c7ec1c05 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 23:14:54.884799 ntpd[1969]: Listen normally on 15 cali386cd9a76cc [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 23:14:54.884870 ntpd[1969]: Listen normally on 16 calid2ad852ccea [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 23:14:54.884936 ntpd[1969]: Listen normally on 17 califb7e7e8ccc4 [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 23:14:54.970328 containerd[1995]: time="2025-07-15T23:14:54.970230269Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:54.978936 containerd[1995]: time="2025-07-15T23:14:54.976727357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 23:14:54.990660 containerd[1995]: time="2025-07-15T23:14:54.990596513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 348.695666ms" Jul 15 23:14:54.990943 containerd[1995]: time="2025-07-15T23:14:54.990906593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 23:14:54.993367 containerd[1995]: time="2025-07-15T23:14:54.993300437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 23:14:54.995879 containerd[1995]: time="2025-07-15T23:14:54.995821253Z" level=info msg="CreateContainer within sandbox 
\"ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:14:55.018299 containerd[1995]: time="2025-07-15T23:14:55.015917773Z" level=info msg="Container 2ff36d1f54c75872d4eff03611c317e420199489a3602e9468e7251fdbe9d375: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:55.044236 containerd[1995]: time="2025-07-15T23:14:55.044172061Z" level=info msg="CreateContainer within sandbox \"ce3d0640ad41f68282f9ca5a0a83b2bf5fbcb676b62cf26d1beac6ee4ad833a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ff36d1f54c75872d4eff03611c317e420199489a3602e9468e7251fdbe9d375\"" Jul 15 23:14:55.047275 containerd[1995]: time="2025-07-15T23:14:55.047216329Z" level=info msg="StartContainer for \"2ff36d1f54c75872d4eff03611c317e420199489a3602e9468e7251fdbe9d375\"" Jul 15 23:14:55.053467 containerd[1995]: time="2025-07-15T23:14:55.053397301Z" level=info msg="connecting to shim 2ff36d1f54c75872d4eff03611c317e420199489a3602e9468e7251fdbe9d375" address="unix:///run/containerd/s/7e47b42da614a5911e1cef5d34867a7867f77eab319f33963d47d45847caa033" protocol=ttrpc version=3 Jul 15 23:14:55.096056 systemd[1]: Started cri-containerd-2ff36d1f54c75872d4eff03611c317e420199489a3602e9468e7251fdbe9d375.scope - libcontainer container 2ff36d1f54c75872d4eff03611c317e420199489a3602e9468e7251fdbe9d375. 
Jul 15 23:14:55.275590 containerd[1995]: time="2025-07-15T23:14:55.275499794Z" level=info msg="StartContainer for \"2ff36d1f54c75872d4eff03611c317e420199489a3602e9468e7251fdbe9d375\" returns successfully" Jul 15 23:14:56.723584 kubelet[3284]: I0715 23:14:56.723029 3284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:14:57.864327 kubelet[3284]: I0715 23:14:57.863705 3284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:14:57.941181 kubelet[3284]: I0715 23:14:57.940911 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79c9578dc8-48pmd" podStartSLOduration=40.209921605 podStartE2EDuration="44.940885412s" podCreationTimestamp="2025-07-15 23:14:13 +0000 UTC" firstStartedPulling="2025-07-15 23:14:50.26210833 +0000 UTC m=+54.327830167" lastFinishedPulling="2025-07-15 23:14:54.993072137 +0000 UTC m=+59.058793974" observedRunningTime="2025-07-15 23:14:55.738625637 +0000 UTC m=+59.804347462" watchObservedRunningTime="2025-07-15 23:14:57.940885412 +0000 UTC m=+62.006607237" Jul 15 23:14:58.054041 systemd[1]: Started sshd@7-172.31.19.30:22-139.178.89.65:49760.service - OpenSSH per-connection server daemon (139.178.89.65:49760). Jul 15 23:14:58.294858 sshd[5627]: Accepted publickey for core from 139.178.89.65 port 49760 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:14:58.313713 sshd-session[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:58.335369 systemd-logind[1974]: New session 8 of user core. Jul 15 23:14:58.342923 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 23:14:58.902045 sshd[5629]: Connection closed by 139.178.89.65 port 49760 Jul 15 23:14:58.902526 sshd-session[5627]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:58.920069 systemd-logind[1974]: Session 8 logged out. Waiting for processes to exit. 
Jul 15 23:14:58.922153 systemd[1]: sshd@7-172.31.19.30:22-139.178.89.65:49760.service: Deactivated successfully. Jul 15 23:14:58.930818 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 23:14:58.939454 systemd-logind[1974]: Removed session 8. Jul 15 23:15:00.191295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627530535.mount: Deactivated successfully. Jul 15 23:15:01.431649 containerd[1995]: time="2025-07-15T23:15:01.431379021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:01.433894 containerd[1995]: time="2025-07-15T23:15:01.433827681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 15 23:15:01.436542 containerd[1995]: time="2025-07-15T23:15:01.436135317Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:01.443628 containerd[1995]: time="2025-07-15T23:15:01.443532753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:01.445327 containerd[1995]: time="2025-07-15T23:15:01.445258185Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 6.451892672s" Jul 15 23:15:01.445327 containerd[1995]: time="2025-07-15T23:15:01.445327005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference 
\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 15 23:15:01.449122 containerd[1995]: time="2025-07-15T23:15:01.448951341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 23:15:01.450933 containerd[1995]: time="2025-07-15T23:15:01.450343137Z" level=info msg="CreateContainer within sandbox \"8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 23:15:01.473538 containerd[1995]: time="2025-07-15T23:15:01.473467665Z" level=info msg="Container 2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:01.507803 containerd[1995]: time="2025-07-15T23:15:01.507738969Z" level=info msg="CreateContainer within sandbox \"8443abccea69089ef3acf62ae52e64fb8ee488542b66aea98e865f1efe47d7ef\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\"" Jul 15 23:15:01.509581 containerd[1995]: time="2025-07-15T23:15:01.509430861Z" level=info msg="StartContainer for \"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\"" Jul 15 23:15:01.514085 containerd[1995]: time="2025-07-15T23:15:01.513942285Z" level=info msg="connecting to shim 2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215" address="unix:///run/containerd/s/ecbbf65a0ae08cc8079335f3126745f226693af29da1b5fd94dd79913947ef99" protocol=ttrpc version=3 Jul 15 23:15:01.592194 systemd[1]: Started cri-containerd-2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215.scope - libcontainer container 2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215. 
Jul 15 23:15:01.743248 containerd[1995]: time="2025-07-15T23:15:01.742970111Z" level=info msg="StartContainer for \"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\" returns successfully" Jul 15 23:15:02.025675 containerd[1995]: time="2025-07-15T23:15:02.025609712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\" id:\"437165593a10e36468c57c404b786c68f5799fab9172f30b05c9a20d1c3b292e\" pid:5702 exit_status:1 exited_at:{seconds:1752621302 nanos:24757688}" Jul 15 23:15:02.925340 containerd[1995]: time="2025-07-15T23:15:02.925274052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\" id:\"e9c3d3b297d699a71108a9bb7d64bb5490a8051b8f60ad97000551e1faa197b2\" pid:5728 exit_status:1 exited_at:{seconds:1752621302 nanos:924758016}" Jul 15 23:15:03.447849 containerd[1995]: time="2025-07-15T23:15:03.447791171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\" id:\"35272a15e9fa45df4840d9428154779c0b0c1b8b7b3607c655f667bd31088c4b\" pid:5751 exited_at:{seconds:1752621303 nanos:445880291}" Jul 15 23:15:03.950062 systemd[1]: Started sshd@8-172.31.19.30:22-139.178.89.65:49082.service - OpenSSH per-connection server daemon (139.178.89.65:49082). 
Jul 15 23:15:04.063282 containerd[1995]: time="2025-07-15T23:15:04.063148990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\" id:\"6735c9e6a535af0d9b572d6516bfbf27a0875384c81bb420e7f41224180882a7\" pid:5780 exit_status:1 exited_at:{seconds:1752621304 nanos:61432066}" Jul 15 23:15:04.209679 sshd[5791]: Accepted publickey for core from 139.178.89.65 port 49082 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:15:04.214943 sshd-session[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:04.236513 systemd-logind[1974]: New session 9 of user core. Jul 15 23:15:04.242108 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 23:15:04.666604 sshd[5793]: Connection closed by 139.178.89.65 port 49082 Jul 15 23:15:04.667632 sshd-session[5791]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:04.679600 systemd[1]: sshd@8-172.31.19.30:22-139.178.89.65:49082.service: Deactivated successfully. Jul 15 23:15:04.685738 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 23:15:04.690887 systemd-logind[1974]: Session 9 logged out. Waiting for processes to exit. Jul 15 23:15:04.698620 systemd-logind[1974]: Removed session 9. 
Jul 15 23:15:05.154930 containerd[1995]: time="2025-07-15T23:15:05.154760507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2\" id:\"600b59ee8fae6b08fbbb2f44fe44651f8e76e72b0a0d37170b3a80737dee855b\" pid:5818 exited_at:{seconds:1752621305 nanos:153854243}" Jul 15 23:15:05.230007 kubelet[3284]: I0715 23:15:05.229882 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-dn8dz" podStartSLOduration=30.746268808 podStartE2EDuration="41.22985616s" podCreationTimestamp="2025-07-15 23:14:24 +0000 UTC" firstStartedPulling="2025-07-15 23:14:50.964150609 +0000 UTC m=+55.029872446" lastFinishedPulling="2025-07-15 23:15:01.447737949 +0000 UTC m=+65.513459798" observedRunningTime="2025-07-15 23:15:01.805642367 +0000 UTC m=+65.871364348" watchObservedRunningTime="2025-07-15 23:15:05.22985616 +0000 UTC m=+69.295578009" Jul 15 23:15:05.462593 containerd[1995]: time="2025-07-15T23:15:05.462327361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\" id:\"6c116e49f1040b863ee80b483d47cd179bf9fb5e056b8f169d4fc9366f587798\" pid:5842 exit_status:1 exited_at:{seconds:1752621305 nanos:461183149}" Jul 15 23:15:05.733696 containerd[1995]: time="2025-07-15T23:15:05.733087298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:05.736335 containerd[1995]: time="2025-07-15T23:15:05.736286306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 15 23:15:05.739037 containerd[1995]: time="2025-07-15T23:15:05.738993578Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 15 23:15:05.743744 containerd[1995]: time="2025-07-15T23:15:05.743690390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:05.745208 containerd[1995]: time="2025-07-15T23:15:05.744949454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 4.295939913s" Jul 15 23:15:05.745208 containerd[1995]: time="2025-07-15T23:15:05.745008938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 23:15:05.747703 containerd[1995]: time="2025-07-15T23:15:05.747631418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 23:15:05.789977 containerd[1995]: time="2025-07-15T23:15:05.789303291Z" level=info msg="CreateContainer within sandbox \"85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 23:15:05.811075 containerd[1995]: time="2025-07-15T23:15:05.810998235Z" level=info msg="Container 43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:05.826179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1743704583.mount: Deactivated successfully. 
Jul 15 23:15:05.836037 containerd[1995]: time="2025-07-15T23:15:05.835965987Z" level=info msg="CreateContainer within sandbox \"85d6823e3a6754aeb3861968636215d1bf911dd76ba44a0d68b1ccb106597c80\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7\"" Jul 15 23:15:05.837320 containerd[1995]: time="2025-07-15T23:15:05.837170379Z" level=info msg="StartContainer for \"43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7\"" Jul 15 23:15:05.842075 containerd[1995]: time="2025-07-15T23:15:05.841977219Z" level=info msg="connecting to shim 43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7" address="unix:///run/containerd/s/1f6375cfca4eebd0bfda41f9fbb54b111f9d010514dd1892593c58f27e8e465a" protocol=ttrpc version=3 Jul 15 23:15:05.891874 systemd[1]: Started cri-containerd-43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7.scope - libcontainer container 43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7. 
Jul 15 23:15:05.981063 containerd[1995]: time="2025-07-15T23:15:05.981017392Z" level=info msg="StartContainer for \"43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7\" returns successfully" Jul 15 23:15:06.832775 kubelet[3284]: I0715 23:15:06.831150 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-544bb4cb54-bn5qk" podStartSLOduration=28.614931794 podStartE2EDuration="42.83111956s" podCreationTimestamp="2025-07-15 23:14:24 +0000 UTC" firstStartedPulling="2025-07-15 23:14:51.530803164 +0000 UTC m=+55.596524989" lastFinishedPulling="2025-07-15 23:15:05.746990834 +0000 UTC m=+69.812712755" observedRunningTime="2025-07-15 23:15:06.830578084 +0000 UTC m=+70.896299957" watchObservedRunningTime="2025-07-15 23:15:06.83111956 +0000 UTC m=+70.896841661" Jul 15 23:15:06.903206 containerd[1995]: time="2025-07-15T23:15:06.903137008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7\" id:\"9e33c9ba718062f6493b14478c7de581622f073524c86dedefde9c09e79ab268\" pid:5914 exited_at:{seconds:1752621306 nanos:902317852}" Jul 15 23:15:07.639014 containerd[1995]: time="2025-07-15T23:15:07.638914396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:07.640845 containerd[1995]: time="2025-07-15T23:15:07.640790164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 15 23:15:07.641479 containerd[1995]: time="2025-07-15T23:15:07.641384020Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:07.647918 containerd[1995]: time="2025-07-15T23:15:07.647784676Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:15:07.650259 containerd[1995]: time="2025-07-15T23:15:07.650062912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.902363754s" Jul 15 23:15:07.650259 containerd[1995]: time="2025-07-15T23:15:07.650128096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 15 23:15:07.657151 containerd[1995]: time="2025-07-15T23:15:07.656950996Z" level=info msg="CreateContainer within sandbox \"18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 23:15:07.678197 containerd[1995]: time="2025-07-15T23:15:07.678076888Z" level=info msg="Container ceee8bf6342162f086171c17c1fcf802ccf3693d0fd5ef322781e083255af115: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:15:07.697613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295602039.mount: Deactivated successfully. 
Jul 15 23:15:07.705847 containerd[1995]: time="2025-07-15T23:15:07.705760516Z" level=info msg="CreateContainer within sandbox \"18037455f6fee87b242875cb95abf4f2214fcd4069a9fba8a6aa16d35d52ac8b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ceee8bf6342162f086171c17c1fcf802ccf3693d0fd5ef322781e083255af115\"" Jul 15 23:15:07.709632 containerd[1995]: time="2025-07-15T23:15:07.708351616Z" level=info msg="StartContainer for \"ceee8bf6342162f086171c17c1fcf802ccf3693d0fd5ef322781e083255af115\"" Jul 15 23:15:07.712248 containerd[1995]: time="2025-07-15T23:15:07.712065412Z" level=info msg="connecting to shim ceee8bf6342162f086171c17c1fcf802ccf3693d0fd5ef322781e083255af115" address="unix:///run/containerd/s/e6b41c41c50bed968cbe6c7b197e19ca22484bcc9ddd9c5f28896d4306558b25" protocol=ttrpc version=3 Jul 15 23:15:07.764259 systemd[1]: Started cri-containerd-ceee8bf6342162f086171c17c1fcf802ccf3693d0fd5ef322781e083255af115.scope - libcontainer container ceee8bf6342162f086171c17c1fcf802ccf3693d0fd5ef322781e083255af115. 
Jul 15 23:15:07.899308 containerd[1995]: time="2025-07-15T23:15:07.899108453Z" level=info msg="StartContainer for \"ceee8bf6342162f086171c17c1fcf802ccf3693d0fd5ef322781e083255af115\" returns successfully" Jul 15 23:15:08.409440 kubelet[3284]: I0715 23:15:08.409354 3284 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 23:15:08.409440 kubelet[3284]: I0715 23:15:08.409437 3284 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 23:15:08.864342 kubelet[3284]: I0715 23:15:08.862929 3284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j48z9" podStartSLOduration=26.75388086 podStartE2EDuration="44.862902486s" podCreationTimestamp="2025-07-15 23:14:24 +0000 UTC" firstStartedPulling="2025-07-15 23:14:49.542846686 +0000 UTC m=+53.608568523" lastFinishedPulling="2025-07-15 23:15:07.651868324 +0000 UTC m=+71.717590149" observedRunningTime="2025-07-15 23:15:08.861088662 +0000 UTC m=+72.926810523" watchObservedRunningTime="2025-07-15 23:15:08.862902486 +0000 UTC m=+72.928624311" Jul 15 23:15:09.711277 systemd[1]: Started sshd@9-172.31.19.30:22-139.178.89.65:50812.service - OpenSSH per-connection server daemon (139.178.89.65:50812). Jul 15 23:15:09.938639 sshd[5958]: Accepted publickey for core from 139.178.89.65 port 50812 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:15:09.942384 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:09.954229 systemd-logind[1974]: New session 10 of user core. Jul 15 23:15:09.961900 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 15 23:15:10.252490 sshd[5960]: Connection closed by 139.178.89.65 port 50812 Jul 15 23:15:10.253318 sshd-session[5958]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:10.263581 systemd[1]: sshd@9-172.31.19.30:22-139.178.89.65:50812.service: Deactivated successfully. Jul 15 23:15:10.264797 systemd-logind[1974]: Session 10 logged out. Waiting for processes to exit. Jul 15 23:15:10.271677 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 23:15:10.298144 systemd-logind[1974]: Removed session 10. Jul 15 23:15:10.302046 systemd[1]: Started sshd@10-172.31.19.30:22-139.178.89.65:50828.service - OpenSSH per-connection server daemon (139.178.89.65:50828). Jul 15 23:15:10.513939 sshd[5973]: Accepted publickey for core from 139.178.89.65 port 50828 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:15:10.516076 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:10.528054 systemd-logind[1974]: New session 11 of user core. Jul 15 23:15:10.534133 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 23:15:10.908824 sshd[5975]: Connection closed by 139.178.89.65 port 50828 Jul 15 23:15:10.910500 sshd-session[5973]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:10.924487 systemd[1]: sshd@10-172.31.19.30:22-139.178.89.65:50828.service: Deactivated successfully. Jul 15 23:15:10.932850 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 23:15:10.943519 systemd-logind[1974]: Session 11 logged out. Waiting for processes to exit. Jul 15 23:15:10.974081 systemd[1]: Started sshd@11-172.31.19.30:22-139.178.89.65:50844.service - OpenSSH per-connection server daemon (139.178.89.65:50844). Jul 15 23:15:10.980743 systemd-logind[1974]: Removed session 11. 
Jul 15 23:15:11.181356 sshd[5985]: Accepted publickey for core from 139.178.89.65 port 50844 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:15:11.185714 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:11.197839 systemd-logind[1974]: New session 12 of user core. Jul 15 23:15:11.205963 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 23:15:11.480041 sshd[5987]: Connection closed by 139.178.89.65 port 50844 Jul 15 23:15:11.481185 sshd-session[5985]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:11.489425 systemd[1]: sshd@11-172.31.19.30:22-139.178.89.65:50844.service: Deactivated successfully. Jul 15 23:15:11.494608 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 23:15:11.498230 systemd-logind[1974]: Session 12 logged out. Waiting for processes to exit. Jul 15 23:15:11.502682 systemd-logind[1974]: Removed session 12. Jul 15 23:15:16.520136 systemd[1]: Started sshd@12-172.31.19.30:22-139.178.89.65:50846.service - OpenSSH per-connection server daemon (139.178.89.65:50846). Jul 15 23:15:16.729335 sshd[6006]: Accepted publickey for core from 139.178.89.65 port 50846 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:15:16.732218 sshd-session[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:16.741890 systemd-logind[1974]: New session 13 of user core. Jul 15 23:15:16.750079 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 23:15:17.063064 sshd[6008]: Connection closed by 139.178.89.65 port 50846 Jul 15 23:15:17.064332 sshd-session[6006]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:17.076321 systemd[1]: sshd@12-172.31.19.30:22-139.178.89.65:50846.service: Deactivated successfully. Jul 15 23:15:17.086959 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 23:15:17.092616 systemd-logind[1974]: Session 13 logged out. 
Waiting for processes to exit. Jul 15 23:15:17.096879 systemd-logind[1974]: Removed session 13. Jul 15 23:15:19.005622 kubelet[3284]: I0715 23:15:19.005338 3284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:15:20.921085 containerd[1995]: time="2025-07-15T23:15:20.920990826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7\" id:\"20553c94b41b97cb75abd1b6d07faa003afbcbb56a217db3f91999b9a48b7d89\" pid:6036 exited_at:{seconds:1752621320 nanos:920038374}" Jul 15 23:15:22.107988 systemd[1]: Started sshd@13-172.31.19.30:22-139.178.89.65:54248.service - OpenSSH per-connection server daemon (139.178.89.65:54248). Jul 15 23:15:22.328137 sshd[6047]: Accepted publickey for core from 139.178.89.65 port 54248 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:15:22.330971 sshd-session[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:22.339270 systemd-logind[1974]: New session 14 of user core. Jul 15 23:15:22.346105 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 23:15:22.628081 sshd[6049]: Connection closed by 139.178.89.65 port 54248 Jul 15 23:15:22.628727 sshd-session[6047]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:22.640016 systemd[1]: sshd@13-172.31.19.30:22-139.178.89.65:54248.service: Deactivated successfully. Jul 15 23:15:22.648202 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 23:15:22.653685 systemd-logind[1974]: Session 14 logged out. Waiting for processes to exit. Jul 15 23:15:22.658340 systemd-logind[1974]: Removed session 14. Jul 15 23:15:27.667537 systemd[1]: Started sshd@14-172.31.19.30:22-139.178.89.65:54260.service - OpenSSH per-connection server daemon (139.178.89.65:54260). 
Jul 15 23:15:27.891793 sshd[6069]: Accepted publickey for core from 139.178.89.65 port 54260 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:15:27.895183 sshd-session[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:27.904775 systemd-logind[1974]: New session 15 of user core. Jul 15 23:15:27.920909 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 23:15:28.207641 sshd[6071]: Connection closed by 139.178.89.65 port 54260 Jul 15 23:15:28.208700 sshd-session[6069]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:28.217082 systemd[1]: sshd@14-172.31.19.30:22-139.178.89.65:54260.service: Deactivated successfully. Jul 15 23:15:28.222341 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 23:15:28.224928 systemd-logind[1974]: Session 15 logged out. Waiting for processes to exit. Jul 15 23:15:28.228160 systemd-logind[1974]: Removed session 15. Jul 15 23:15:33.248997 systemd[1]: Started sshd@15-172.31.19.30:22-139.178.89.65:45758.service - OpenSSH per-connection server daemon (139.178.89.65:45758). Jul 15 23:15:33.453523 sshd[6087]: Accepted publickey for core from 139.178.89.65 port 45758 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:15:33.457779 sshd-session[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:33.473659 systemd-logind[1974]: New session 16 of user core. Jul 15 23:15:33.477864 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 15 23:15:33.507640 containerd[1995]: time="2025-07-15T23:15:33.506822152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7\" id:\"fc62d749d1a06e3b557c62fdfbfeb9f18b3787d7761fb33398615d0e0267f49f\" pid:6102 exited_at:{seconds:1752621333 nanos:506030716}" Jul 15 23:15:33.849627 sshd[6108]: Connection closed by 139.178.89.65 port 45758 Jul 15 23:15:33.850762 sshd-session[6087]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:33.860376 systemd[1]: sshd@15-172.31.19.30:22-139.178.89.65:45758.service: Deactivated successfully. Jul 15 23:15:33.869802 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 23:15:33.877063 systemd-logind[1974]: Session 16 logged out. Waiting for processes to exit. Jul 15 23:15:33.900472 systemd[1]: Started sshd@16-172.31.19.30:22-139.178.89.65:45768.service - OpenSSH per-connection server daemon (139.178.89.65:45768). Jul 15 23:15:33.906829 systemd-logind[1974]: Removed session 16. Jul 15 23:15:34.124995 sshd[6123]: Accepted publickey for core from 139.178.89.65 port 45768 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:15:34.126395 sshd-session[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:34.137636 systemd-logind[1974]: New session 17 of user core. Jul 15 23:15:34.145874 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 23:15:34.861033 sshd[6125]: Connection closed by 139.178.89.65 port 45768 Jul 15 23:15:34.862742 sshd-session[6123]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:34.872305 systemd[1]: sshd@16-172.31.19.30:22-139.178.89.65:45768.service: Deactivated successfully. Jul 15 23:15:34.883506 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 23:15:34.888313 systemd-logind[1974]: Session 17 logged out. Waiting for processes to exit. 
Jul 15 23:15:34.917178 systemd[1]: Started sshd@17-172.31.19.30:22-139.178.89.65:45774.service - OpenSSH per-connection server daemon (139.178.89.65:45774). Jul 15 23:15:34.921926 systemd-logind[1974]: Removed session 17. Jul 15 23:15:35.181962 sshd[6154]: Accepted publickey for core from 139.178.89.65 port 45774 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:15:35.185369 sshd-session[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:15:35.203718 systemd-logind[1974]: New session 18 of user core. Jul 15 23:15:35.214878 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 23:15:35.343716 containerd[1995]: time="2025-07-15T23:15:35.343646861Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2\" id:\"69a6bf0f1ff45a15f59c3b761497bd33ac431500ac6b20002a73ccf8e50ae07c\" pid:6147 exited_at:{seconds:1752621335 nanos:343201817}" Jul 15 23:15:35.712066 containerd[1995]: time="2025-07-15T23:15:35.711996895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\" id:\"16846cbbfe8e6f8369b7ab08b5cd9fb81c1397a42fa9fd84fa3dc62d5d101303\" pid:6173 exited_at:{seconds:1752621335 nanos:711147319}" Jul 15 23:15:39.723598 sshd[6183]: Connection closed by 139.178.89.65 port 45774 Jul 15 23:15:39.720887 sshd-session[6154]: pam_unix(sshd:session): session closed for user core Jul 15 23:15:39.732711 systemd[1]: sshd@17-172.31.19.30:22-139.178.89.65:45774.service: Deactivated successfully. Jul 15 23:15:39.741057 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 23:15:39.742656 systemd[1]: session-18.scope: Consumed 1.122s CPU time, 79.4M memory peak. Jul 15 23:15:39.748957 systemd-logind[1974]: Session 18 logged out. Waiting for processes to exit. 
Jul 15 23:15:39.781808 systemd[1]: Started sshd@18-172.31.19.30:22-139.178.89.65:58374.service - OpenSSH per-connection server daemon (139.178.89.65:58374).
Jul 15 23:15:39.784420 systemd-logind[1974]: Removed session 18.
Jul 15 23:15:40.022622 sshd[6200]: Accepted publickey for core from 139.178.89.65 port 58374 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:40.025787 sshd-session[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:40.042664 systemd-logind[1974]: New session 19 of user core.
Jul 15 23:15:40.048861 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 23:15:40.781597 sshd[6205]: Connection closed by 139.178.89.65 port 58374
Jul 15 23:15:40.783277 sshd-session[6200]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:40.791602 systemd[1]: sshd@18-172.31.19.30:22-139.178.89.65:58374.service: Deactivated successfully.
Jul 15 23:15:40.800059 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 23:15:40.803548 systemd-logind[1974]: Session 19 logged out. Waiting for processes to exit.
Jul 15 23:15:40.827247 systemd[1]: Started sshd@19-172.31.19.30:22-139.178.89.65:58388.service - OpenSSH per-connection server daemon (139.178.89.65:58388).
Jul 15 23:15:40.830655 systemd-logind[1974]: Removed session 19.
Jul 15 23:15:41.047650 sshd[6215]: Accepted publickey for core from 139.178.89.65 port 58388 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:41.050216 sshd-session[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:41.059833 systemd-logind[1974]: New session 20 of user core.
Jul 15 23:15:41.069224 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 23:15:41.361697 sshd[6217]: Connection closed by 139.178.89.65 port 58388
Jul 15 23:15:41.363124 sshd-session[6215]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:41.373634 systemd-logind[1974]: Session 20 logged out. Waiting for processes to exit.
Jul 15 23:15:41.375722 systemd[1]: sshd@19-172.31.19.30:22-139.178.89.65:58388.service: Deactivated successfully.
Jul 15 23:15:41.380528 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 23:15:41.388475 systemd-logind[1974]: Removed session 20.
Jul 15 23:15:46.405055 systemd[1]: Started sshd@20-172.31.19.30:22-139.178.89.65:58398.service - OpenSSH per-connection server daemon (139.178.89.65:58398).
Jul 15 23:15:46.664836 sshd[6229]: Accepted publickey for core from 139.178.89.65 port 58398 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:46.668836 sshd-session[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:46.678783 systemd-logind[1974]: New session 21 of user core.
Jul 15 23:15:46.685886 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 23:15:47.012290 sshd[6231]: Connection closed by 139.178.89.65 port 58398
Jul 15 23:15:47.013253 sshd-session[6229]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:47.024071 systemd[1]: sshd@20-172.31.19.30:22-139.178.89.65:58398.service: Deactivated successfully.
Jul 15 23:15:47.031496 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 23:15:47.035473 systemd-logind[1974]: Session 21 logged out. Waiting for processes to exit.
Jul 15 23:15:47.040206 systemd-logind[1974]: Removed session 21.
Jul 15 23:15:52.056099 systemd[1]: Started sshd@21-172.31.19.30:22-139.178.89.65:43566.service - OpenSSH per-connection server daemon (139.178.89.65:43566).
Jul 15 23:15:52.268298 sshd[6246]: Accepted publickey for core from 139.178.89.65 port 43566 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:52.271229 sshd-session[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:52.281747 systemd-logind[1974]: New session 22 of user core.
Jul 15 23:15:52.290907 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 23:15:52.607454 sshd[6248]: Connection closed by 139.178.89.65 port 43566
Jul 15 23:15:52.609156 sshd-session[6246]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:52.620071 systemd[1]: sshd@21-172.31.19.30:22-139.178.89.65:43566.service: Deactivated successfully.
Jul 15 23:15:52.625168 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 23:15:52.635791 systemd-logind[1974]: Session 22 logged out. Waiting for processes to exit.
Jul 15 23:15:52.641650 systemd-logind[1974]: Removed session 22.
Jul 15 23:15:57.650018 systemd[1]: Started sshd@22-172.31.19.30:22-139.178.89.65:43578.service - OpenSSH per-connection server daemon (139.178.89.65:43578).
Jul 15 23:15:57.916716 sshd[6262]: Accepted publickey for core from 139.178.89.65 port 43578 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:57.918465 sshd-session[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:57.932994 systemd-logind[1974]: New session 23 of user core.
Jul 15 23:15:57.940230 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 23:15:58.238750 sshd[6264]: Connection closed by 139.178.89.65 port 43578
Jul 15 23:15:58.239787 sshd-session[6262]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:58.250591 systemd[1]: sshd@22-172.31.19.30:22-139.178.89.65:43578.service: Deactivated successfully.
Jul 15 23:15:58.255960 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 23:15:58.261832 systemd-logind[1974]: Session 23 logged out. Waiting for processes to exit.
Jul 15 23:15:58.268133 systemd-logind[1974]: Removed session 23.
Jul 15 23:16:03.280531 systemd[1]: Started sshd@23-172.31.19.30:22-139.178.89.65:47420.service - OpenSSH per-connection server daemon (139.178.89.65:47420).
Jul 15 23:16:03.518486 containerd[1995]: time="2025-07-15T23:16:03.518403117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\" id:\"729558704dcbc2df439a5fdf0c0eb4d92156756ad34c97ea943c93f3f289473d\" pid:6290 exited_at:{seconds:1752621363 nanos:517650573}"
Jul 15 23:16:03.524513 sshd[6295]: Accepted publickey for core from 139.178.89.65 port 47420 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:16:03.533345 sshd-session[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:16:03.546660 systemd-logind[1974]: New session 24 of user core.
Jul 15 23:16:03.549887 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 23:16:03.570236 containerd[1995]: time="2025-07-15T23:16:03.569525542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7\" id:\"5af71a859ca2bf90f588b49a15dadac8bce5eb6046fe9b10841d358538a15a0e\" pid:6315 exited_at:{seconds:1752621363 nanos:569091574}"
Jul 15 23:16:03.838269 sshd[6325]: Connection closed by 139.178.89.65 port 47420
Jul 15 23:16:03.838063 sshd-session[6295]: pam_unix(sshd:session): session closed for user core
Jul 15 23:16:03.848657 systemd[1]: sshd@23-172.31.19.30:22-139.178.89.65:47420.service: Deactivated successfully.
Jul 15 23:16:03.857520 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 23:16:03.861861 systemd-logind[1974]: Session 24 logged out. Waiting for processes to exit.
Jul 15 23:16:03.869245 systemd-logind[1974]: Removed session 24.
Jul 15 23:16:05.191389 containerd[1995]: time="2025-07-15T23:16:05.191326366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\" id:\"181e36909c242f12eb396ec042262c3253db7aab2c371c1d324cc7bc21da97ce\" pid:6372 exited_at:{seconds:1752621365 nanos:190936822}"
Jul 15 23:16:05.202385 containerd[1995]: time="2025-07-15T23:16:05.202319854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2\" id:\"e26b165fefc57c4fd5db0c654697ab997ab14a5460a0ff671fa6f02d0c678a18\" pid:6349 exited_at:{seconds:1752621365 nanos:201880018}"
Jul 15 23:16:08.873513 systemd[1]: Started sshd@24-172.31.19.30:22-139.178.89.65:47422.service - OpenSSH per-connection server daemon (139.178.89.65:47422).
Jul 15 23:16:09.075087 sshd[6389]: Accepted publickey for core from 139.178.89.65 port 47422 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:16:09.079068 sshd-session[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:16:09.090510 systemd-logind[1974]: New session 25 of user core.
Jul 15 23:16:09.100881 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 23:16:09.377710 sshd[6391]: Connection closed by 139.178.89.65 port 47422
Jul 15 23:16:09.378184 sshd-session[6389]: pam_unix(sshd:session): session closed for user core
Jul 15 23:16:09.387820 systemd[1]: sshd@24-172.31.19.30:22-139.178.89.65:47422.service: Deactivated successfully.
Jul 15 23:16:09.392472 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 23:16:09.397901 systemd-logind[1974]: Session 25 logged out. Waiting for processes to exit.
Jul 15 23:16:09.401479 systemd-logind[1974]: Removed session 25.
Jul 15 23:16:14.418797 systemd[1]: Started sshd@25-172.31.19.30:22-139.178.89.65:39434.service - OpenSSH per-connection server daemon (139.178.89.65:39434).
Jul 15 23:16:14.632987 sshd[6406]: Accepted publickey for core from 139.178.89.65 port 39434 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:16:14.636436 sshd-session[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:16:14.647276 systemd-logind[1974]: New session 26 of user core.
Jul 15 23:16:14.655867 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 15 23:16:14.948808 sshd[6415]: Connection closed by 139.178.89.65 port 39434
Jul 15 23:16:14.949846 sshd-session[6406]: pam_unix(sshd:session): session closed for user core
Jul 15 23:16:14.961166 systemd[1]: sshd@25-172.31.19.30:22-139.178.89.65:39434.service: Deactivated successfully.
Jul 15 23:16:14.968473 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 23:16:14.975873 systemd-logind[1974]: Session 26 logged out. Waiting for processes to exit.
Jul 15 23:16:14.982407 systemd-logind[1974]: Removed session 26.
Jul 15 23:16:20.912306 containerd[1995]: time="2025-07-15T23:16:20.912246772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7\" id:\"2b763dcbc1f3eff2c8276c9e88fae237e56601bbd1dad2e8c79c2d5338589f6e\" pid:6440 exited_at:{seconds:1752621380 nanos:911296144}"
Jul 15 23:16:28.669225 systemd[1]: cri-containerd-6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9.scope: Deactivated successfully.
Jul 15 23:16:28.669817 systemd[1]: cri-containerd-6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9.scope: Consumed 7.394s CPU time, 58.8M memory peak, 132K read from disk.
Jul 15 23:16:28.680967 containerd[1995]: time="2025-07-15T23:16:28.680915038Z" level=info msg="received exit event container_id:\"6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9\" id:\"6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9\" pid:3128 exit_status:1 exited_at:{seconds:1752621388 nanos:680493586}"
Jul 15 23:16:28.682623 containerd[1995]: time="2025-07-15T23:16:28.681236794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9\" id:\"6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9\" pid:3128 exit_status:1 exited_at:{seconds:1752621388 nanos:680493586}"
Jul 15 23:16:28.729065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9-rootfs.mount: Deactivated successfully.
Jul 15 23:16:29.182751 kubelet[3284]: I0715 23:16:29.182702 3284 scope.go:117] "RemoveContainer" containerID="6defb3d37fb02233a5195f3c1b1b8d6b38267d30d523af5aefb5d9e0fb861dd9"
Jul 15 23:16:29.215297 containerd[1995]: time="2025-07-15T23:16:29.215191389Z" level=info msg="CreateContainer within sandbox \"d935c5b6f93062838f75f06c9aaedca2e05b5a018f2946bf6cab3cc3e6f66377\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 15 23:16:29.236597 containerd[1995]: time="2025-07-15T23:16:29.233835585Z" level=info msg="Container ef03912b930eaa06fa623d3569d26a8df43fe955cffd95ab32de2fe3f347bb5f: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:16:29.257601 containerd[1995]: time="2025-07-15T23:16:29.257504817Z" level=info msg="CreateContainer within sandbox \"d935c5b6f93062838f75f06c9aaedca2e05b5a018f2946bf6cab3cc3e6f66377\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ef03912b930eaa06fa623d3569d26a8df43fe955cffd95ab32de2fe3f347bb5f\""
Jul 15 23:16:29.258994 containerd[1995]: time="2025-07-15T23:16:29.258953121Z" level=info msg="StartContainer for \"ef03912b930eaa06fa623d3569d26a8df43fe955cffd95ab32de2fe3f347bb5f\""
Jul 15 23:16:29.261357 containerd[1995]: time="2025-07-15T23:16:29.261297309Z" level=info msg="connecting to shim ef03912b930eaa06fa623d3569d26a8df43fe955cffd95ab32de2fe3f347bb5f" address="unix:///run/containerd/s/ec2ae781bffb2defa98b3fa94499e3beb4b639246067724028c5e44cbaff39fd" protocol=ttrpc version=3
Jul 15 23:16:29.300875 systemd[1]: Started cri-containerd-ef03912b930eaa06fa623d3569d26a8df43fe955cffd95ab32de2fe3f347bb5f.scope - libcontainer container ef03912b930eaa06fa623d3569d26a8df43fe955cffd95ab32de2fe3f347bb5f.
Jul 15 23:16:29.391349 containerd[1995]: time="2025-07-15T23:16:29.391168990Z" level=info msg="StartContainer for \"ef03912b930eaa06fa623d3569d26a8df43fe955cffd95ab32de2fe3f347bb5f\" returns successfully"
Jul 15 23:16:29.530695 kubelet[3284]: E0715 23:16:29.530607 3284 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-30?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 15 23:16:29.656814 systemd[1]: cri-containerd-8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52.scope: Deactivated successfully.
Jul 15 23:16:29.657377 systemd[1]: cri-containerd-8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52.scope: Consumed 26.983s CPU time, 95.9M memory peak, 628K read from disk.
Jul 15 23:16:29.664835 containerd[1995]: time="2025-07-15T23:16:29.664647923Z" level=info msg="received exit event container_id:\"8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52\" id:\"8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52\" pid:3787 exit_status:1 exited_at:{seconds:1752621389 nanos:663746867}"
Jul 15 23:16:29.665302 containerd[1995]: time="2025-07-15T23:16:29.665239595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52\" id:\"8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52\" pid:3787 exit_status:1 exited_at:{seconds:1752621389 nanos:663746867}"
Jul 15 23:16:29.710283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52-rootfs.mount: Deactivated successfully.
Jul 15 23:16:30.195404 kubelet[3284]: I0715 23:16:30.195068 3284 scope.go:117] "RemoveContainer" containerID="8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52"
Jul 15 23:16:30.199519 containerd[1995]: time="2025-07-15T23:16:30.199444558Z" level=info msg="CreateContainer within sandbox \"fdc75af76b7e9a1b9afd29cea568e6f8ad0cf79c9198d41d2043dada3a702a22\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 15 23:16:30.218966 containerd[1995]: time="2025-07-15T23:16:30.218896690Z" level=info msg="Container fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:16:30.238671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666234230.mount: Deactivated successfully.
Jul 15 23:16:30.245425 containerd[1995]: time="2025-07-15T23:16:30.245336362Z" level=info msg="CreateContainer within sandbox \"fdc75af76b7e9a1b9afd29cea568e6f8ad0cf79c9198d41d2043dada3a702a22\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20\""
Jul 15 23:16:30.246822 containerd[1995]: time="2025-07-15T23:16:30.246187570Z" level=info msg="StartContainer for \"fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20\""
Jul 15 23:16:30.248642 containerd[1995]: time="2025-07-15T23:16:30.248538922Z" level=info msg="connecting to shim fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20" address="unix:///run/containerd/s/a9f790c3ef2eca6647a71deec11d72dfe2c02861916cd0e0be28fa80d7d2ea86" protocol=ttrpc version=3
Jul 15 23:16:30.296140 systemd[1]: Started cri-containerd-fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20.scope - libcontainer container fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20.
Jul 15 23:16:30.376230 containerd[1995]: time="2025-07-15T23:16:30.376119935Z" level=info msg="StartContainer for \"fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20\" returns successfully"
Jul 15 23:16:33.471868 containerd[1995]: time="2025-07-15T23:16:33.471769058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43f8d193b60bad0c320ef4d1a6152d3dd02ed3a97de8e2f425e7fb4cd085bfd7\" id:\"be4c6bce83dc9f882d98bc0f0c3f46ba5d4346929fd970517ce8a545cd932417\" pid:6563 exit_status:1 exited_at:{seconds:1752621393 nanos:470921798}"
Jul 15 23:16:34.027644 systemd[1]: cri-containerd-c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a.scope: Deactivated successfully.
Jul 15 23:16:34.029281 systemd[1]: cri-containerd-c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a.scope: Consumed 3.463s CPU time, 22.2M memory peak.
Jul 15 23:16:34.034642 containerd[1995]: time="2025-07-15T23:16:34.034511401Z" level=info msg="received exit event container_id:\"c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a\" id:\"c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a\" pid:3119 exit_status:1 exited_at:{seconds:1752621394 nanos:33718441}"
Jul 15 23:16:34.034960 containerd[1995]: time="2025-07-15T23:16:34.034762273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a\" id:\"c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a\" pid:3119 exit_status:1 exited_at:{seconds:1752621394 nanos:33718441}"
Jul 15 23:16:34.082378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a-rootfs.mount: Deactivated successfully.
Jul 15 23:16:34.216651 kubelet[3284]: I0715 23:16:34.216596 3284 scope.go:117] "RemoveContainer" containerID="c3f14aca721304682095c9701ca852263afcc65355561c82c9cd3ab9c0335b0a"
Jul 15 23:16:34.220730 containerd[1995]: time="2025-07-15T23:16:34.220672082Z" level=info msg="CreateContainer within sandbox \"49f6d6f48bf1982501d34e0d14e657f8c7064a7ce6e90ce33cf289c6b34cfd12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 15 23:16:34.240034 containerd[1995]: time="2025-07-15T23:16:34.238843562Z" level=info msg="Container eba3ff7b1cc6857666d6406fdb864e14e66c8d642c6f31b7b479fd2bd9a8aa4a: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:16:34.260981 containerd[1995]: time="2025-07-15T23:16:34.260895278Z" level=info msg="CreateContainer within sandbox \"49f6d6f48bf1982501d34e0d14e657f8c7064a7ce6e90ce33cf289c6b34cfd12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"eba3ff7b1cc6857666d6406fdb864e14e66c8d642c6f31b7b479fd2bd9a8aa4a\""
Jul 15 23:16:34.262072 containerd[1995]: time="2025-07-15T23:16:34.262013558Z" level=info msg="StartContainer for \"eba3ff7b1cc6857666d6406fdb864e14e66c8d642c6f31b7b479fd2bd9a8aa4a\""
Jul 15 23:16:34.264248 containerd[1995]: time="2025-07-15T23:16:34.264181082Z" level=info msg="connecting to shim eba3ff7b1cc6857666d6406fdb864e14e66c8d642c6f31b7b479fd2bd9a8aa4a" address="unix:///run/containerd/s/92c3a2ab358097b0776fcb71d511e27abc7ecf3ba259aff8836efad8848b89c4" protocol=ttrpc version=3
Jul 15 23:16:34.307883 systemd[1]: Started cri-containerd-eba3ff7b1cc6857666d6406fdb864e14e66c8d642c6f31b7b479fd2bd9a8aa4a.scope - libcontainer container eba3ff7b1cc6857666d6406fdb864e14e66c8d642c6f31b7b479fd2bd9a8aa4a.
Jul 15 23:16:34.389724 containerd[1995]: time="2025-07-15T23:16:34.389639343Z" level=info msg="StartContainer for \"eba3ff7b1cc6857666d6406fdb864e14e66c8d642c6f31b7b479fd2bd9a8aa4a\" returns successfully"
Jul 15 23:16:35.070595 containerd[1995]: time="2025-07-15T23:16:35.070517978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3250d3d476471a12916d0caff037079365e2348df47bb9689751581f42892d2\" id:\"308f7aae1643debe2f33dfba3e5974e13719578750ba44833beaa52029cefe36\" pid:6627 exited_at:{seconds:1752621395 nanos:70117106}"
Jul 15 23:16:35.180325 containerd[1995]: time="2025-07-15T23:16:35.180000267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a90cb0c7bf20968dde07fcdd5ded177afe15fad516a46c444d740c9dbe41215\" id:\"d8a744b209c445457714f835e8c439c265d53a94b986cc2ac51cad2eeb20b848\" pid:6651 exited_at:{seconds:1752621395 nanos:179640999}"
Jul 15 23:16:39.531728 kubelet[3284]: E0715 23:16:39.531510 3284 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-30?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 15 23:16:41.815425 systemd[1]: cri-containerd-fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20.scope: Deactivated successfully.
Jul 15 23:16:41.819134 containerd[1995]: time="2025-07-15T23:16:41.818866812Z" level=info msg="received exit event container_id:\"fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20\" id:\"fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20\" pid:6530 exit_status:1 exited_at:{seconds:1752621401 nanos:816946320}"
Jul 15 23:16:41.820467 containerd[1995]: time="2025-07-15T23:16:41.820407876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20\" id:\"fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20\" pid:6530 exit_status:1 exited_at:{seconds:1752621401 nanos:816946320}"
Jul 15 23:16:41.875877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20-rootfs.mount: Deactivated successfully.
Jul 15 23:16:42.253691 kubelet[3284]: I0715 23:16:42.253524 3284 scope.go:117] "RemoveContainer" containerID="8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52"
Jul 15 23:16:42.254471 kubelet[3284]: I0715 23:16:42.253972 3284 scope.go:117] "RemoveContainer" containerID="fdd5c31661cbc795ec00aa5256fea2bb3f723acd3d1ad977cdb8b16cfb9f2b20"
Jul 15 23:16:42.257187 kubelet[3284]: E0715 23:16:42.257006 3284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-9czms_tigera-operator(3f441fe2-ab9b-4666-ab94-34052535b4f0)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-9czms" podUID="3f441fe2-ab9b-4666-ab94-34052535b4f0"
Jul 15 23:16:42.259538 containerd[1995]: time="2025-07-15T23:16:42.259422118Z" level=info msg="RemoveContainer for \"8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52\""
Jul 15 23:16:42.271118 containerd[1995]: time="2025-07-15T23:16:42.271066282Z" level=info msg="RemoveContainer for \"8fd9f9baa1ec0e91757fc99b986452b95cff3b751fe86ad6b8baa51c41964d52\" returns successfully"
Jul 15 23:16:49.532515 kubelet[3284]: E0715 23:16:49.532425 3284 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-30?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"