Sep 4 17:09:42.185160 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 4 17:09:42.185208 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep 4 15:52:28 -00 2024
Sep 4 17:09:42.185344 kernel: KASLR disabled due to lack of seed
Sep 4 17:09:42.185363 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:09:42.185380 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Sep 4 17:09:42.185396 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:09:42.185413 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 4 17:09:42.185429 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 17:09:42.185446 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 17:09:42.185461 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 4 17:09:42.185481 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 17:09:42.185497 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 4 17:09:42.185513 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 4 17:09:42.185529 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 4 17:09:42.185547 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 17:09:42.185568 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 4 17:09:42.185608 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 4 17:09:42.185626 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 4 17:09:42.185643 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 4 17:09:42.185660 kernel: printk: bootconsole [uart0] enabled
Sep 4 17:09:42.185677 kernel: NUMA: Failed to initialise from firmware
Sep 4 17:09:42.185694 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:09:42.185711 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 4 17:09:42.185728 kernel: Zone ranges:
Sep 4 17:09:42.185744 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 17:09:42.185760 kernel: DMA32 empty
Sep 4 17:09:42.185782 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 4 17:09:42.185799 kernel: Movable zone start for each node
Sep 4 17:09:42.185815 kernel: Early memory node ranges
Sep 4 17:09:42.185831 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 4 17:09:42.185848 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 4 17:09:42.185864 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 4 17:09:42.185880 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 4 17:09:42.185897 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 4 17:09:42.185913 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 4 17:09:42.185929 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 4 17:09:42.185945 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 4 17:09:42.185962 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:09:42.185982 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 4 17:09:42.185999 kernel: psci: probing for conduit method from ACPI.
Sep 4 17:09:42.186023 kernel: psci: PSCIv1.0 detected in firmware.
Sep 4 17:09:42.186040 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 17:09:42.186058 kernel: psci: Trusted OS migration not required
Sep 4 17:09:42.186080 kernel: psci: SMC Calling Convention v1.1
Sep 4 17:09:42.186098 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 17:09:42.186115 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 17:09:42.186133 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 17:09:42.186151 kernel: Detected PIPT I-cache on CPU0
Sep 4 17:09:42.186208 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 17:09:42.186246 kernel: CPU features: detected: Spectre-v2
Sep 4 17:09:42.186342 kernel: CPU features: detected: Spectre-v3a
Sep 4 17:09:42.186362 kernel: CPU features: detected: Spectre-BHB
Sep 4 17:09:42.186379 kernel: CPU features: detected: ARM erratum 1742098
Sep 4 17:09:42.186397 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 4 17:09:42.186422 kernel: alternatives: applying boot alternatives
Sep 4 17:09:42.186442 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:09:42.186461 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:09:42.186479 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:09:42.186497 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:09:42.186514 kernel: Fallback order for Node 0: 0
Sep 4 17:09:42.186532 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 4 17:09:42.186549 kernel: Policy zone: Normal
Sep 4 17:09:42.186566 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:09:42.186584 kernel: software IO TLB: area num 2.
Sep 4 17:09:42.186601 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 4 17:09:42.186624 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Sep 4 17:09:42.186642 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:09:42.186660 kernel: trace event string verifier disabled
Sep 4 17:09:42.186677 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:09:42.186695 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:09:42.186714 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:09:42.186731 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:09:42.186750 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:09:42.186767 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:09:42.186785 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:09:42.186802 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 17:09:42.186824 kernel: GICv3: 96 SPIs implemented
Sep 4 17:09:42.186842 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 17:09:42.186859 kernel: Root IRQ handler: gic_handle_irq
Sep 4 17:09:42.186877 kernel: GICv3: GICv3 features: 16 PPIs
Sep 4 17:09:42.186895 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 4 17:09:42.186913 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 4 17:09:42.186931 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 17:09:42.186949 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 17:09:42.186966 kernel: GICv3: using LPI property table @0x00000004000e0000
Sep 4 17:09:42.186984 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 4 17:09:42.187002 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Sep 4 17:09:42.187019 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:09:42.187041 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 4 17:09:42.187060 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 4 17:09:42.187078 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 4 17:09:42.187095 kernel: Console: colour dummy device 80x25
Sep 4 17:09:42.187114 kernel: printk: console [tty1] enabled
Sep 4 17:09:42.187132 kernel: ACPI: Core revision 20230628
Sep 4 17:09:42.187150 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 4 17:09:42.187168 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:09:42.187186 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:09:42.187205 kernel: SELinux: Initializing.
Sep 4 17:09:42.187274 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:09:42.187294 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:09:42.187312 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:09:42.187331 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:09:42.187348 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:09:42.187367 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:09:42.187385 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 4 17:09:42.187403 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 4 17:09:42.187421 kernel: Remapping and enabling EFI services.
Sep 4 17:09:42.187445 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:09:42.187463 kernel: Detected PIPT I-cache on CPU1
Sep 4 17:09:42.187481 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 4 17:09:42.187499 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Sep 4 17:09:42.187517 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 4 17:09:42.187534 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:09:42.187552 kernel: SMP: Total of 2 processors activated.
Sep 4 17:09:42.187570 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 17:09:42.187588 kernel: CPU features: detected: 32-bit EL1 Support
Sep 4 17:09:42.187610 kernel: CPU features: detected: CRC32 instructions
Sep 4 17:09:42.187628 kernel: CPU: All CPU(s) started at EL1
Sep 4 17:09:42.187656 kernel: alternatives: applying system-wide alternatives
Sep 4 17:09:42.187679 kernel: devtmpfs: initialized
Sep 4 17:09:42.187698 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:09:42.187717 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:09:42.187735 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:09:42.187754 kernel: SMBIOS 3.0.0 present.
Sep 4 17:09:42.187773 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 4 17:09:42.187796 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:09:42.187815 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 17:09:42.187834 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 17:09:42.187854 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 17:09:42.187872 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:09:42.187891 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1
Sep 4 17:09:42.187910 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:09:42.187932 kernel: cpuidle: using governor menu
Sep 4 17:09:42.187952 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 17:09:42.187970 kernel: ASID allocator initialised with 65536 entries
Sep 4 17:09:42.187989 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:09:42.188009 kernel: Serial: AMBA PL011 UART driver
Sep 4 17:09:42.188029 kernel: Modules: 17600 pages in range for non-PLT usage
Sep 4 17:09:42.188048 kernel: Modules: 509120 pages in range for PLT usage
Sep 4 17:09:42.188068 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:09:42.188087 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:09:42.188111 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 17:09:42.188131 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 17:09:42.188151 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:09:42.188171 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:09:42.188190 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 17:09:42.188210 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 17:09:42.188270 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:09:42.188290 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:09:42.188309 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:09:42.188335 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:09:42.188354 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:09:42.188373 kernel: ACPI: Interpreter enabled
Sep 4 17:09:42.188392 kernel: ACPI: Using GIC for interrupt routing
Sep 4 17:09:42.188410 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 17:09:42.188429 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 4 17:09:42.188739 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:09:42.188943 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 17:09:42.189140 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 17:09:42.189375 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 4 17:09:42.189592 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 4 17:09:42.189636 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 4 17:09:42.189682 kernel: acpiphp: Slot [1] registered
Sep 4 17:09:42.189703 kernel: acpiphp: Slot [2] registered
Sep 4 17:09:42.189722 kernel: acpiphp: Slot [3] registered
Sep 4 17:09:42.189741 kernel: acpiphp: Slot [4] registered
Sep 4 17:09:42.189760 kernel: acpiphp: Slot [5] registered
Sep 4 17:09:42.189786 kernel: acpiphp: Slot [6] registered
Sep 4 17:09:42.189805 kernel: acpiphp: Slot [7] registered
Sep 4 17:09:42.189823 kernel: acpiphp: Slot [8] registered
Sep 4 17:09:42.189841 kernel: acpiphp: Slot [9] registered
Sep 4 17:09:42.189859 kernel: acpiphp: Slot [10] registered
Sep 4 17:09:42.189878 kernel: acpiphp: Slot [11] registered
Sep 4 17:09:42.189896 kernel: acpiphp: Slot [12] registered
Sep 4 17:09:42.189914 kernel: acpiphp: Slot [13] registered
Sep 4 17:09:42.189933 kernel: acpiphp: Slot [14] registered
Sep 4 17:09:42.189955 kernel: acpiphp: Slot [15] registered
Sep 4 17:09:42.189974 kernel: acpiphp: Slot [16] registered
Sep 4 17:09:42.189992 kernel: acpiphp: Slot [17] registered
Sep 4 17:09:42.190010 kernel: acpiphp: Slot [18] registered
Sep 4 17:09:42.190028 kernel: acpiphp: Slot [19] registered
Sep 4 17:09:42.190047 kernel: acpiphp: Slot [20] registered
Sep 4 17:09:42.190065 kernel: acpiphp: Slot [21] registered
Sep 4 17:09:42.190084 kernel: acpiphp: Slot [22] registered
Sep 4 17:09:42.190102 kernel: acpiphp: Slot [23] registered
Sep 4 17:09:42.190121 kernel: acpiphp: Slot [24] registered
Sep 4 17:09:42.190144 kernel: acpiphp: Slot [25] registered
Sep 4 17:09:42.190162 kernel: acpiphp: Slot [26] registered
Sep 4 17:09:42.190181 kernel: acpiphp: Slot [27] registered
Sep 4 17:09:42.190199 kernel: acpiphp: Slot [28] registered
Sep 4 17:09:42.190235 kernel: acpiphp: Slot [29] registered
Sep 4 17:09:42.190259 kernel: acpiphp: Slot [30] registered
Sep 4 17:09:42.190278 kernel: acpiphp: Slot [31] registered
Sep 4 17:09:42.190297 kernel: PCI host bridge to bus 0000:00
Sep 4 17:09:42.190552 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 4 17:09:42.190773 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 17:09:42.190972 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:09:42.191167 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 4 17:09:42.191492 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 4 17:09:42.191728 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 4 17:09:42.191941 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 4 17:09:42.192172 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 17:09:42.192435 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 4 17:09:42.192641 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:09:42.192856 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 17:09:42.193057 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 4 17:09:42.195360 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 4 17:09:42.195610 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 4 17:09:42.195824 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:09:42.196026 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 4 17:09:42.196257 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 4 17:09:42.196474 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 4 17:09:42.196676 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 4 17:09:42.196884 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 4 17:09:42.197074 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 4 17:09:42.199386 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 17:09:42.199603 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:09:42.199629 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 17:09:42.199649 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 17:09:42.199669 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 17:09:42.199687 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 17:09:42.199706 kernel: iommu: Default domain type: Translated
Sep 4 17:09:42.199725 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 17:09:42.199753 kernel: efivars: Registered efivars operations
Sep 4 17:09:42.199772 kernel: vgaarb: loaded
Sep 4 17:09:42.199790 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 17:09:42.199809 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:09:42.199827 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:09:42.199846 kernel: pnp: PnP ACPI init
Sep 4 17:09:42.200051 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 4 17:09:42.200080 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 17:09:42.200104 kernel: NET: Registered PF_INET protocol family
Sep 4 17:09:42.200123 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:09:42.200142 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:09:42.200161 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:09:42.200180 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:09:42.200199 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:09:42.200235 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:09:42.200259 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:09:42.200278 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:09:42.200303 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:09:42.200322 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:09:42.200340 kernel: kvm [1]: HYP mode not available
Sep 4 17:09:42.200359 kernel: Initialise system trusted keyrings
Sep 4 17:09:42.200378 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:09:42.200396 kernel: Key type asymmetric registered
Sep 4 17:09:42.200414 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:09:42.200433 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 17:09:42.200451 kernel: io scheduler mq-deadline registered
Sep 4 17:09:42.200474 kernel: io scheduler kyber registered
Sep 4 17:09:42.200493 kernel: io scheduler bfq registered
Sep 4 17:09:42.200709 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 4 17:09:42.200737 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 17:09:42.200755 kernel: ACPI: button: Power Button [PWRB]
Sep 4 17:09:42.200775 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 4 17:09:42.200793 kernel: ACPI: button: Sleep Button [SLPB]
Sep 4 17:09:42.200812 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:09:42.200837 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 17:09:42.201048 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 4 17:09:42.201076 kernel: printk: console [ttyS0] disabled
Sep 4 17:09:42.201095 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 4 17:09:42.201114 kernel: printk: console [ttyS0] enabled
Sep 4 17:09:42.201133 kernel: printk: bootconsole [uart0] disabled
Sep 4 17:09:42.201152 kernel: thunder_xcv, ver 1.0
Sep 4 17:09:42.201171 kernel: thunder_bgx, ver 1.0
Sep 4 17:09:42.201189 kernel: nicpf, ver 1.0
Sep 4 17:09:42.201208 kernel: nicvf, ver 1.0
Sep 4 17:09:42.203648 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 17:09:42.203843 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:09:41 UTC (1725469781)
Sep 4 17:09:42.203870 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:09:42.203889 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 4 17:09:42.203909 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 17:09:42.203927 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 17:09:42.203946 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:09:42.203964 kernel: Segment Routing with IPv6
Sep 4 17:09:42.203993 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:09:42.204012 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:09:42.204030 kernel: Key type dns_resolver registered
Sep 4 17:09:42.204049 kernel: registered taskstats version 1
Sep 4 17:09:42.204068 kernel: Loading compiled-in X.509 certificates
Sep 4 17:09:42.204088 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7'
Sep 4 17:09:42.204107 kernel: Key type .fscrypt registered
Sep 4 17:09:42.204125 kernel: Key type fscrypt-provisioning registered
Sep 4 17:09:42.204144 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:09:42.204167 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:09:42.204185 kernel: ima: No architecture policies found
Sep 4 17:09:42.204204 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 17:09:42.204244 kernel: clk: Disabling unused clocks
Sep 4 17:09:42.204265 kernel: Freeing unused kernel memory: 39040K
Sep 4 17:09:42.204284 kernel: Run /init as init process
Sep 4 17:09:42.204302 kernel: with arguments:
Sep 4 17:09:42.204321 kernel: /init
Sep 4 17:09:42.204339 kernel: with environment:
Sep 4 17:09:42.204363 kernel: HOME=/
Sep 4 17:09:42.204382 kernel: TERM=linux
Sep 4 17:09:42.204400 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:09:42.204423 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:09:42.204446 systemd[1]: Detected virtualization amazon.
Sep 4 17:09:42.204468 systemd[1]: Detected architecture arm64.
Sep 4 17:09:42.204489 systemd[1]: Running in initrd.
Sep 4 17:09:42.204509 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:09:42.204533 systemd[1]: Hostname set to .
Sep 4 17:09:42.204553 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:09:42.204573 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:09:42.204593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:09:42.204614 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:09:42.204635 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:09:42.204656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:09:42.204681 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:09:42.204702 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:09:42.204725 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:09:42.204746 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:09:42.204767 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:09:42.204787 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:09:42.204807 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:09:42.204832 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:09:42.204852 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:09:42.204872 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:09:42.204892 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:09:42.204912 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:09:42.204933 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:09:42.204953 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:09:42.204973 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:09:42.204993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:09:42.205018 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:09:42.205039 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:09:42.205059 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:09:42.205079 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:09:42.205099 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:09:42.205119 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:09:42.205140 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:09:42.205161 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:09:42.205185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:09:42.205206 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:09:42.207507 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:09:42.207575 systemd-journald[250]: Collecting audit messages is disabled.
Sep 4 17:09:42.207627 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:09:42.207650 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:09:42.207670 systemd-journald[250]: Journal started
Sep 4 17:09:42.207712 systemd-journald[250]: Runtime Journal (/run/log/journal/ec27c8b526501ca5e5fdd2618d305694) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:09:42.184536 systemd-modules-load[251]: Inserted module 'overlay'
Sep 4 17:09:42.215245 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:09:42.224256 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:09:42.229481 kernel: Bridge firewalling registered
Sep 4 17:09:42.226609 systemd-modules-load[251]: Inserted module 'br_netfilter'
Sep 4 17:09:42.227918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:09:42.230628 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:09:42.247773 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:09:42.263896 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:09:42.269661 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:09:42.286748 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:09:42.315550 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:09:42.322758 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:09:42.334421 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:09:42.342559 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:09:42.354714 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:09:42.368562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:09:42.372322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:09:42.409860 dracut-cmdline[286]: dracut-dracut-053
Sep 4 17:09:42.415266 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:09:42.458535 systemd-resolved[288]: Positive Trust Anchors:
Sep 4 17:09:42.460481 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:09:42.463461 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:09:42.581259 kernel: SCSI subsystem initialized
Sep 4 17:09:42.590242 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:09:42.602253 kernel: iscsi: registered transport (tcp)
Sep 4 17:09:42.624892 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:09:42.624968 kernel: QLogic iSCSI HBA Driver
Sep 4 17:09:42.686258 kernel: random: crng init done
Sep 4 17:09:42.686509 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 4 17:09:42.690067 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:09:42.705908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:09:42.716342 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:09:42.727673 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:09:42.766299 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:09:42.766387 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:09:42.766417 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:09:42.843268 kernel: raid6: neonx8 gen() 6669 MB/s
Sep 4 17:09:42.860272 kernel: raid6: neonx4 gen() 6518 MB/s
Sep 4 17:09:42.877262 kernel: raid6: neonx2 gen() 5452 MB/s
Sep 4 17:09:42.894257 kernel: raid6: neonx1 gen() 3956 MB/s
Sep 4 17:09:42.911249 kernel: raid6: int64x8 gen() 3833 MB/s
Sep 4 17:09:42.928248 kernel: raid6: int64x4 gen() 3730 MB/s
Sep 4 17:09:42.945249 kernel: raid6: int64x2 gen() 3613 MB/s
Sep 4 17:09:42.963029 kernel: raid6: int64x1 gen() 2762 MB/s
Sep 4 17:09:42.963064 kernel: raid6: using algorithm neonx8 gen() 6669 MB/s
Sep 4 17:09:42.980954 kernel: raid6: .... xor() 4831 MB/s, rmw enabled
Sep 4 17:09:42.981008 kernel: raid6: using neon recovery algorithm
Sep 4 17:09:42.990057 kernel: xor: measuring software checksum speed
Sep 4 17:09:42.990122 kernel: 8regs : 11104 MB/sec
Sep 4 17:09:42.992249 kernel: 32regs : 12011 MB/sec
Sep 4 17:09:42.994464 kernel: arm64_neon : 9338 MB/sec
Sep 4 17:09:42.994499 kernel: xor: using function: 32regs (12011 MB/sec)
Sep 4 17:09:43.082271 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:09:43.101058 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:09:43.113520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:09:43.148107 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 4 17:09:43.155879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:09:43.176737 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:09:43.204915 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Sep 4 17:09:43.261937 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:09:43.272523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:09:43.387437 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:09:43.399513 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:09:43.448649 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:09:43.459444 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:09:43.461931 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:09:43.464323 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:09:43.474742 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:09:43.525383 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:09:43.602260 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 17:09:43.602324 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 4 17:09:43.608198 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 17:09:43.608569 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 17:09:43.615364 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:09:43.615766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:09:43.644666 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:74:ab:00:d4:03
Sep 4 17:09:43.624350 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:09:43.629027 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:09:43.629349 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:09:43.634163 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:09:43.645043 (udev-worker)[536]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:09:43.660604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:09:43.680309 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 4 17:09:43.680382 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 17:09:43.692249 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 17:09:43.700330 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:09:43.700396 kernel: GPT:9289727 != 16777215
Sep 4 17:09:43.703209 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:09:43.703256 kernel: GPT:9289727 != 16777215
Sep 4 17:09:43.703282 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:09:43.703307 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:09:43.714470 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:09:43.725584 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:09:43.777310 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:09:43.789024 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (539)
Sep 4 17:09:43.830269 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (516)
Sep 4 17:09:43.870883 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 17:09:43.932414 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 17:09:43.948021 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 17:09:43.953706 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
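The repeated `GPT:9289727 != 16777215` warnings above mean the primary GPT header records the backup header at LBA 9289727 while the disk actually ends at LBA 16777215 — typical when an image built for a smaller disk is written to a larger EBS volume (the log suggests GNU Parted to repair it). A small sketch of the consistency check the kernel is making, using the numbers from the log; the function names are illustrative, not kernel code:

```python
def expected_alt_lba(total_sectors: int) -> int:
    # The backup (alternate) GPT header must sit on the disk's last LBA.
    return total_sectors - 1

def gpt_backup_misplaced(alt_lba_in_header: int, total_sectors: int) -> bool:
    # Mirrors the kernel warning "GPT:9289727 != 16777215".
    return alt_lba_in_header != expected_alt_lba(total_sectors)

# nvme0n1 here has 16777216 512-byte sectors (8 GiB), but the primary header
# still points at the backup location of the smaller source image.
print(gpt_backup_misplaced(9289727, 16777216))  # → True
```

In practice the `disk-uuid` run later in this log rewrites both headers, which is why the warning does not recur after the partition table is re-read.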
Sep 4 17:09:43.971047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:09:43.987539 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:09:43.998457 disk-uuid[661]: Primary Header is updated.
Sep 4 17:09:43.998457 disk-uuid[661]: Secondary Entries is updated.
Sep 4 17:09:43.998457 disk-uuid[661]: Secondary Header is updated.
Sep 4 17:09:44.009251 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:09:44.020270 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:09:45.023263 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:09:45.025800 disk-uuid[662]: The operation has completed successfully.
Sep 4 17:09:45.224278 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:09:45.226759 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:09:45.276537 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:09:45.292660 sh[923]: Success
Sep 4 17:09:45.320654 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:09:45.425146 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:09:45.452412 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:09:45.459603 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:09:45.490635 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20
Sep 4 17:09:45.490704 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:09:45.490745 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:09:45.492298 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:09:45.493511 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:09:45.513265 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 17:09:45.525965 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:09:45.528733 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:09:45.550669 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:09:45.559869 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:09:45.591303 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:09:45.591373 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:09:45.591409 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:09:45.601262 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:09:45.618133 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:09:45.621429 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:09:45.631744 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:09:45.639666 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:09:45.773306 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:09:45.787630 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:09:45.827016 ignition[1028]: Ignition 2.18.0
Sep 4 17:09:45.827041 ignition[1028]: Stage: fetch-offline
Sep 4 17:09:45.830823 ignition[1028]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:45.831847 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:45.833888 ignition[1028]: Ignition finished successfully
Sep 4 17:09:45.838633 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:09:45.863967 systemd-networkd[1122]: lo: Link UP
Sep 4 17:09:45.863984 systemd-networkd[1122]: lo: Gained carrier
Sep 4 17:09:45.867138 systemd-networkd[1122]: Enumeration completed
Sep 4 17:09:45.867600 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:09:45.868833 systemd-networkd[1122]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:09:45.868841 systemd-networkd[1122]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:09:45.874029 systemd[1]: Reached target network.target - Network.
Sep 4 17:09:45.879443 systemd-networkd[1122]: eth0: Link UP
Sep 4 17:09:45.879451 systemd-networkd[1122]: eth0: Gained carrier
Sep 4 17:09:45.879470 systemd-networkd[1122]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:09:45.917213 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:09:45.921351 systemd-networkd[1122]: eth0: DHCPv4 address 172.31.29.2/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:09:45.962247 ignition[1126]: Ignition 2.18.0
Sep 4 17:09:45.962737 ignition[1126]: Stage: fetch
Sep 4 17:09:45.963380 ignition[1126]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:45.963406 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:45.963579 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:45.978089 ignition[1126]: PUT result: OK
Sep 4 17:09:45.981083 ignition[1126]: parsed url from cmdline: ""
Sep 4 17:09:45.981256 ignition[1126]: no config URL provided
Sep 4 17:09:45.981277 ignition[1126]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:09:45.981307 ignition[1126]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:09:45.982434 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:45.985668 ignition[1126]: PUT result: OK
Sep 4 17:09:45.985966 ignition[1126]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 17:09:45.989975 ignition[1126]: GET result: OK
Sep 4 17:09:45.990584 ignition[1126]: parsing config with SHA512: 3d33a540c78e6a00c72a2bdb0c6952e7e4b0825d4d3042c594d8429ed80bae46461c6de2cc0d2abcb0376c9d58ee622d2d87b8177e0504351b0e12909db92d9e
Sep 4 17:09:46.002550 unknown[1126]: fetched base config from "system"
Sep 4 17:09:46.003164 unknown[1126]: fetched base config from "system"
Sep 4 17:09:46.003180 unknown[1126]: fetched user config from "aws"
Sep 4 17:09:46.010198 ignition[1126]: fetch: fetch complete
Sep 4 17:09:46.010413 ignition[1126]: fetch: fetch passed
Sep 4 17:09:46.010547 ignition[1126]: Ignition finished successfully
Sep 4 17:09:46.017169 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:09:46.035666 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
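The `PUT http://169.254.169.254/latest/api/token` followed by an authenticated `GET` of the user data above is Ignition using AWS IMDSv2: a session token is obtained with a PUT carrying a TTL header, then presented as a header on subsequent metadata requests. A hedged sketch of the request shapes — the header names are the documented IMDSv2 ones, but this is an illustration, not Ignition's actual code:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    # PUT /latest/api/token with a TTL header yields an IMDSv2 session token.
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def user_data_request(token: str) -> urllib.request.Request:
    # GET /2019-10-01/user-data, authenticated with the session token,
    # matching the endpoint in the log.
    return urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
```

Sending these with `urllib.request.urlopen` only works from inside an EC2 instance, where 169.254.169.254 is the link-local metadata service.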
Sep 4 17:09:46.063464 ignition[1134]: Ignition 2.18.0
Sep 4 17:09:46.063492 ignition[1134]: Stage: kargs
Sep 4 17:09:46.065082 ignition[1134]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:46.065108 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:46.065450 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:46.068461 ignition[1134]: PUT result: OK
Sep 4 17:09:46.076594 ignition[1134]: kargs: kargs passed
Sep 4 17:09:46.076689 ignition[1134]: Ignition finished successfully
Sep 4 17:09:46.081744 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:09:46.094657 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:09:46.118208 ignition[1143]: Ignition 2.18.0
Sep 4 17:09:46.119062 ignition[1143]: Stage: disks
Sep 4 17:09:46.120111 ignition[1143]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:46.120136 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:46.120300 ignition[1143]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:46.122010 ignition[1143]: PUT result: OK
Sep 4 17:09:46.131715 ignition[1143]: disks: disks passed
Sep 4 17:09:46.131835 ignition[1143]: Ignition finished successfully
Sep 4 17:09:46.135681 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:09:46.139639 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:09:46.141870 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:09:46.145087 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:09:46.151307 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:09:46.153324 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:09:46.176686 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:09:46.230197 systemd-fsck[1152]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:09:46.235295 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:09:46.246533 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:09:46.333264 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none.
Sep 4 17:09:46.334160 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:09:46.340004 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:09:46.363524 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:09:46.370467 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:09:46.376550 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:09:46.376660 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:09:46.376716 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:09:46.402979 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1171)
Sep 4 17:09:46.407067 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:09:46.407133 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:09:46.409176 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:09:46.415277 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:09:46.422664 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:09:46.432680 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:09:46.436495 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:09:46.531127 initrd-setup-root[1195]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:09:46.541191 initrd-setup-root[1202]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:09:46.550251 initrd-setup-root[1209]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:09:46.560830 initrd-setup-root[1216]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:09:46.735180 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:09:46.747443 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:09:46.767667 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:09:46.785501 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:09:46.788077 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:09:46.821367 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:09:46.832469 ignition[1284]: INFO : Ignition 2.18.0
Sep 4 17:09:46.832469 ignition[1284]: INFO : Stage: mount
Sep 4 17:09:46.835851 ignition[1284]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:46.835851 ignition[1284]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:46.835851 ignition[1284]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:46.842959 ignition[1284]: INFO : PUT result: OK
Sep 4 17:09:46.847915 ignition[1284]: INFO : mount: mount passed
Sep 4 17:09:46.850890 ignition[1284]: INFO : Ignition finished successfully
Sep 4 17:09:46.853444 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:09:46.862552 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:09:46.898647 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:09:46.937450 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1298)
Sep 4 17:09:46.941491 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:09:46.941585 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:09:46.941615 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:09:46.949299 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:09:46.951404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:09:46.990717 ignition[1315]: INFO : Ignition 2.18.0
Sep 4 17:09:46.990717 ignition[1315]: INFO : Stage: files
Sep 4 17:09:46.995457 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:46.995457 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:46.995457 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:46.995457 ignition[1315]: INFO : PUT result: OK
Sep 4 17:09:47.007207 ignition[1315]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:09:47.010007 ignition[1315]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:09:47.010007 ignition[1315]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:09:47.017896 ignition[1315]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:09:47.020804 ignition[1315]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:09:47.023984 unknown[1315]: wrote ssh authorized keys file for user: core
Sep 4 17:09:47.026295 ignition[1315]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:09:47.031191 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 4 17:09:47.035102 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 4 17:09:47.035102 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:09:47.035102 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:09:47.038399 systemd-networkd[1122]: eth0: Gained IPv6LL
Sep 4 17:09:47.142784 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 17:09:47.241988 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:09:47.241988 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:09:47.250639 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Sep 4 17:09:47.767420 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 17:09:49.948267 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:09:49.952916 ignition[1315]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 4 17:09:49.956795 ignition[1315]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 4 17:09:49.961538 ignition[1315]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 4 17:09:49.961538 ignition[1315]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 4 17:09:49.961538 ignition[1315]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 4 17:09:49.971644 ignition[1315]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:09:49.975234 ignition[1315]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:09:49.975234 ignition[1315]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 4 17:09:49.981140 ignition[1315]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:09:49.984872 ignition[1315]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:09:49.984872 ignition[1315]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:09:49.984872 ignition[1315]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:09:49.984872 ignition[1315]: INFO : files: files passed
Sep 4 17:09:49.984872 ignition[1315]: INFO : Ignition finished successfully
Sep 4 17:09:50.002004 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:09:50.014704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:09:50.026809 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:09:50.036011 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:09:50.036780 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
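The files-stage operations logged above (an SSH key for `core`, files fetched over HTTPS, a containerd drop-in, a unit preset) are the kind of output a Butane/Ignition config produces. A hypothetical Butane fragment that would yield similar operations — this is not the instance's actual user data, and the key and drop-in contents are placeholders:

```yaml
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... # placeholder key
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-arm64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz
systemd:
  units:
    - name: containerd.service
      dropins:
        - name: 10-use-cgroupfs.conf
          contents: |
            # drop-in body elided
    - name: prepare-helm.service
      enabled: true
```

Butane transpiles this to the Ignition JSON that the `ignition[1315]` process is executing here.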
Sep 4 17:09:50.064153 initrd-setup-root-after-ignition[1344]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:09:50.064153 initrd-setup-root-after-ignition[1344]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:09:50.075332 initrd-setup-root-after-ignition[1348]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:09:50.081013 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:09:50.086191 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:09:50.104644 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:09:50.167956 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:09:50.170301 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:09:50.176848 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:09:50.180530 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:09:50.184170 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:09:50.198583 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:09:50.225122 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:09:50.242683 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:09:50.273084 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:09:50.278404 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:09:50.281917 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:09:50.287144 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:09:50.287774 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:09:50.294987 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:09:50.297994 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:09:50.303799 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:09:50.306555 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:09:50.309556 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:09:50.318333 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:09:50.321341 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:09:50.325511 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:09:50.327879 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:09:50.336179 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:09:50.338174 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:09:50.338732 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:09:50.346672 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:09:50.351185 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:09:50.355774 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:09:50.356083 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:09:50.360242 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:09:50.360507 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:09:50.367085 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:09:50.369286 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:09:50.376107 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:09:50.378233 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:09:50.389729 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:09:50.399872 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:09:50.402537 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:09:50.402908 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:09:50.405729 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:09:50.405991 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:09:50.440324 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:09:50.443349 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:09:50.450510 ignition[1368]: INFO : Ignition 2.18.0
Sep 4 17:09:50.452470 ignition[1368]: INFO : Stage: umount
Sep 4 17:09:50.454811 ignition[1368]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:50.454811 ignition[1368]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:50.454811 ignition[1368]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:50.463095 ignition[1368]: INFO : PUT result: OK
Sep 4 17:09:50.468735 ignition[1368]: INFO : umount: umount passed
Sep 4 17:09:50.471947 ignition[1368]: INFO : Ignition finished successfully
Sep 4 17:09:50.475598 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:09:50.476740 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:09:50.478328 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:09:50.481206 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:09:50.481448 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:09:50.484127 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:09:50.484279 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:09:50.488483 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:09:50.488597 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:09:50.490922 systemd[1]: Stopped target network.target - Network. Sep 4 17:09:50.499670 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:09:50.499802 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:09:50.502144 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:09:50.503891 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:09:50.509091 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:09:50.522023 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:09:50.524887 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:09:50.528154 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:09:50.528286 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:09:50.530431 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:09:50.530535 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:09:50.532589 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:09:50.532705 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:09:50.534763 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:09:50.534888 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:09:50.537916 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:09:50.545548 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Sep 4 17:09:50.550198 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:09:50.550442 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:09:50.554333 systemd-networkd[1122]: eth0: DHCPv6 lease lost Sep 4 17:09:50.554812 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:09:50.555011 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:09:50.561765 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:09:50.563087 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:09:50.567263 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:09:50.567525 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:09:50.581330 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:09:50.581471 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:09:50.611420 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:09:50.630019 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:09:50.630161 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:09:50.637849 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:09:50.637966 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:09:50.640567 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:09:50.640689 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:09:50.643707 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:09:50.643825 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:09:50.646582 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 4 17:09:50.681965 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:09:50.682588 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:09:50.688581 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:09:50.691366 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:09:50.697419 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:09:50.697770 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:09:50.704660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:09:50.704756 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:09:50.706991 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:09:50.707109 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:09:50.709718 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:09:50.709841 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:09:50.712364 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:09:50.712490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:09:50.728154 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:09:50.745428 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:09:50.745590 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:09:50.748109 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:09:50.748263 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:09:50.751503 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 4 17:09:50.751616 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:09:50.757100 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:09:50.757256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:09:50.768157 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:09:50.769329 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:09:50.779526 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:09:50.801594 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:09:50.824472 systemd[1]: Switching root. Sep 4 17:09:50.865571 systemd-journald[250]: Journal stopped Sep 4 17:09:52.986448 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Sep 4 17:09:52.986598 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:09:52.986660 kernel: SELinux: policy capability open_perms=1 Sep 4 17:09:52.988962 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:09:52.989018 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:09:52.989053 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:09:52.989099 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:09:52.989132 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:09:52.989163 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:09:52.989194 kernel: audit: type=1403 audit(1725469791.293:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:09:52.989343 systemd[1]: Successfully loaded SELinux policy in 48.704ms. Sep 4 17:09:52.989400 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.990ms. 
Sep 4 17:09:52.989438 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:09:52.989491 systemd[1]: Detected virtualization amazon. Sep 4 17:09:52.989526 systemd[1]: Detected architecture arm64. Sep 4 17:09:52.989559 systemd[1]: Detected first boot. Sep 4 17:09:52.989591 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:09:52.989623 zram_generator::config[1427]: No configuration found. Sep 4 17:09:52.989660 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:09:52.989700 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:09:52.989736 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 4 17:09:52.989769 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:09:52.989800 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:09:52.989833 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:09:52.989866 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:09:52.989899 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:09:52.989932 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:09:52.989970 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:09:52.990005 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:09:52.990037 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 4 17:09:52.990067 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:09:52.990097 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:09:52.990139 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:09:52.990170 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:09:52.990201 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:09:52.990265 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:09:52.990312 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:09:52.990346 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:09:52.990380 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:09:52.990411 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:09:52.990445 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:09:52.990476 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:09:52.990506 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:09:52.990536 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:09:52.990570 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:09:52.990600 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:09:52.990633 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:09:52.990663 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:09:52.990695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 4 17:09:52.990728 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:09:52.990758 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:09:52.990790 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:09:52.990820 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:09:52.990857 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:09:52.990890 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:09:52.990920 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:09:52.990951 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:09:52.990984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:09:52.991014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:09:52.991050 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:09:52.991081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:09:52.991115 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:09:52.991156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:09:52.995354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:09:52.995397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:09:52.995435 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:09:52.995470 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Sep 4 17:09:52.995507 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 4 17:09:52.995537 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:09:52.995570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:09:52.995601 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:09:52.995640 kernel: fuse: init (API version 7.39) Sep 4 17:09:52.995673 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:09:52.995705 kernel: loop: module loaded Sep 4 17:09:52.995734 kernel: ACPI: bus type drm_connector registered Sep 4 17:09:52.995764 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:09:52.995796 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:09:52.995829 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:09:52.995858 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:09:52.995908 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:09:52.995943 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:09:52.995974 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:09:52.996004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:09:52.996035 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:09:52.996068 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:09:52.996101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:09:52.996130 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:09:52.996161 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 4 17:09:52.996195 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:09:52.997550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:09:52.997617 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:09:52.997655 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:09:52.997687 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:09:52.997726 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:09:52.997757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:09:52.997848 systemd-journald[1529]: Collecting audit messages is disabled. Sep 4 17:09:52.997908 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:09:52.997944 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:09:52.997977 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:09:52.998008 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:09:52.998045 systemd-journald[1529]: Journal started Sep 4 17:09:52.998094 systemd-journald[1529]: Runtime Journal (/run/log/journal/ec27c8b526501ca5e5fdd2618d305694) is 8.0M, max 75.3M, 67.3M free. Sep 4 17:09:53.008344 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:09:53.024945 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:09:53.025065 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:09:53.055463 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 4 17:09:53.070342 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:09:53.081260 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:09:53.081361 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:09:53.100306 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:09:53.115894 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:09:53.132319 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:09:53.144434 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:09:53.147141 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:09:53.152744 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:09:53.169392 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:09:53.205092 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:09:53.233707 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:09:53.248747 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:09:53.265327 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:09:53.283694 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:09:53.298272 systemd-journald[1529]: Time spent on flushing to /var/log/journal/ec27c8b526501ca5e5fdd2618d305694 is 61.289ms for 900 entries. Sep 4 17:09:53.298272 systemd-journald[1529]: System Journal (/var/log/journal/ec27c8b526501ca5e5fdd2618d305694) is 8.0M, max 195.6M, 187.6M free. 
Sep 4 17:09:53.367902 systemd-journald[1529]: Received client request to flush runtime journal. Sep 4 17:09:53.304503 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. Sep 4 17:09:53.304528 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. Sep 4 17:09:53.325141 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:09:53.346603 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:09:53.359052 udevadm[1591]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 17:09:53.378872 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:09:53.433024 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:09:53.443714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:09:53.492425 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Sep 4 17:09:53.493069 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Sep 4 17:09:53.503336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:09:54.196029 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:09:54.211611 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:09:54.274204 systemd-udevd[1607]: Using default interface naming scheme 'v255'. Sep 4 17:09:54.319501 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:09:54.335258 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:09:54.378135 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:09:54.498077 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Sep 4 17:09:54.510521 (udev-worker)[1619]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:09:54.521432 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1609) Sep 4 17:09:54.543531 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:09:54.702317 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1616) Sep 4 17:09:54.716595 systemd-networkd[1610]: lo: Link UP Sep 4 17:09:54.717108 systemd-networkd[1610]: lo: Gained carrier Sep 4 17:09:54.719869 systemd-networkd[1610]: Enumeration completed Sep 4 17:09:54.720258 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:09:54.724918 systemd-networkd[1610]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:09:54.726158 systemd-networkd[1610]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:09:54.728814 systemd-networkd[1610]: eth0: Link UP Sep 4 17:09:54.729487 systemd-networkd[1610]: eth0: Gained carrier Sep 4 17:09:54.729687 systemd-networkd[1610]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:09:54.750031 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:09:54.759373 systemd-networkd[1610]: eth0: DHCPv4 address 172.31.29.2/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 17:09:55.014820 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 17:09:55.026738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:09:55.041098 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Sep 4 17:09:55.067654 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:09:55.089151 lvm[1734]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:09:55.130682 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:09:55.134609 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:09:55.149561 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:09:55.162776 lvm[1739]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:09:55.166022 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:09:55.205138 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:09:55.211289 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:09:55.213710 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:09:55.213762 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:09:55.215828 systemd[1]: Reached target machines.target - Containers. Sep 4 17:09:55.220028 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:09:55.246660 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:09:55.252572 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:09:55.254880 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:09:55.260407 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 4 17:09:55.273673 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:09:55.284084 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:09:55.292170 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:09:55.324870 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:09:55.349298 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:09:55.355179 kernel: loop0: detected capacity change from 0 to 59688 Sep 4 17:09:55.355357 kernel: block loop0: the capability attribute has been deprecated. Sep 4 17:09:55.352191 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:09:55.387257 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:09:55.424277 kernel: loop1: detected capacity change from 0 to 193208 Sep 4 17:09:55.473290 kernel: loop2: detected capacity change from 0 to 113672 Sep 4 17:09:55.508768 kernel: loop3: detected capacity change from 0 to 51896 Sep 4 17:09:55.551351 kernel: loop4: detected capacity change from 0 to 59688 Sep 4 17:09:55.568326 kernel: loop5: detected capacity change from 0 to 193208 Sep 4 17:09:55.594257 kernel: loop6: detected capacity change from 0 to 113672 Sep 4 17:09:55.611260 kernel: loop7: detected capacity change from 0 to 51896 Sep 4 17:09:55.625479 (sd-merge)[1764]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 4 17:09:55.628097 (sd-merge)[1764]: Merged extensions into '/usr'. Sep 4 17:09:55.638199 systemd[1]: Reloading requested from client PID 1750 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:09:55.638263 systemd[1]: Reloading... Sep 4 17:09:55.782711 zram_generator::config[1790]: No configuration found. 
Sep 4 17:09:55.954390 ldconfig[1746]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:09:56.075679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:09:56.218775 systemd[1]: Reloading finished in 579 ms. Sep 4 17:09:56.246522 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:09:56.249356 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:09:56.267531 systemd[1]: Starting ensure-sysext.service... Sep 4 17:09:56.280520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:09:56.292472 systemd[1]: Reloading requested from client PID 1849 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:09:56.292499 systemd[1]: Reloading... Sep 4 17:09:56.352137 systemd-tmpfiles[1850]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:09:56.353960 systemd-tmpfiles[1850]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:09:56.358361 systemd-tmpfiles[1850]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:09:56.359129 systemd-tmpfiles[1850]: ACLs are not supported, ignoring. Sep 4 17:09:56.359441 systemd-tmpfiles[1850]: ACLs are not supported, ignoring. Sep 4 17:09:56.364862 systemd-tmpfiles[1850]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:09:56.365068 systemd-tmpfiles[1850]: Skipping /boot Sep 4 17:09:56.387079 systemd-tmpfiles[1850]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:09:56.387332 systemd-tmpfiles[1850]: Skipping /boot Sep 4 17:09:56.465258 zram_generator::config[1879]: No configuration found. 
Sep 4 17:09:56.700401 systemd-networkd[1610]: eth0: Gained IPv6LL Sep 4 17:09:56.705038 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:09:56.854796 systemd[1]: Reloading finished in 561 ms. Sep 4 17:09:56.883140 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:09:56.898303 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:09:56.914511 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:09:56.929626 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:09:56.937565 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:09:56.952468 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:09:56.959728 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:09:56.982013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:09:56.995184 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:09:57.007638 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:09:57.024795 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:09:57.026990 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:09:57.038910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:09:57.040183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:09:57.056672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 4 17:09:57.057055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:09:57.081065 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:09:57.087849 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:09:57.093532 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:09:57.096880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:09:57.118721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:09:57.136750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:09:57.143691 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:09:57.163120 augenrules[1978]: No rules Sep 4 17:09:57.149780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:09:57.174114 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:09:57.177742 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:09:57.178560 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:09:57.200942 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:09:57.215055 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:09:57.229950 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:09:57.236885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:09:57.240848 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:09:57.244723 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:09:57.245161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 4 17:09:57.248964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:09:57.249470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:09:57.255138 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:09:57.261837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:09:57.282771 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:09:57.303297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:09:57.303463 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:09:57.303510 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:09:57.307800 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:09:57.354478 systemd-resolved[1942]: Positive Trust Anchors:
Sep 4 17:09:57.354519 systemd-resolved[1942]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:09:57.354583 systemd-resolved[1942]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:09:57.362120 systemd-resolved[1942]: Defaulting to hostname 'linux'.
Sep 4 17:09:57.365561 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:09:57.367810 systemd[1]: Reached target network.target - Network.
Sep 4 17:09:57.369491 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:09:57.371446 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:09:57.373493 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:09:57.375498 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:09:57.377706 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:09:57.380151 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:09:57.382300 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:09:57.384558 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:09:57.386762 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:09:57.386810 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:09:57.388408 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:09:57.391347 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:09:57.396263 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:09:57.400321 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:09:57.409132 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:09:57.413396 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:09:57.415291 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:09:57.417356 systemd[1]: System is tainted: cgroupsv1
Sep 4 17:09:57.417462 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:09:57.417519 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:09:57.427627 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:09:57.436706 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 17:09:57.449935 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:09:57.465535 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:09:57.475545 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:09:57.477622 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:09:57.484551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:09:57.499333 jq[2011]: false
Sep 4 17:09:57.514544 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:09:57.526640 systemd[1]: Started ntpd.service - Network Time Service.
Sep 4 17:09:57.541938 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:09:57.564935 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:09:57.581163 dbus-daemon[2010]: [system] SELinux support is enabled
Sep 4 17:09:57.591071 dbus-daemon[2010]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1610 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 4 17:09:57.599080 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 4 17:09:57.610533 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:09:57.623544 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:09:57.638396 extend-filesystems[2012]: Found loop4
Sep 4 17:09:57.638396 extend-filesystems[2012]: Found loop5
Sep 4 17:09:57.638396 extend-filesystems[2012]: Found loop6
Sep 4 17:09:57.638396 extend-filesystems[2012]: Found loop7
Sep 4 17:09:57.638396 extend-filesystems[2012]: Found nvme0n1
Sep 4 17:09:57.638396 extend-filesystems[2012]: Found nvme0n1p1
Sep 4 17:09:57.638396 extend-filesystems[2012]: Found nvme0n1p2
Sep 4 17:09:57.668470 extend-filesystems[2012]: Found nvme0n1p3
Sep 4 17:09:57.668470 extend-filesystems[2012]: Found usr
Sep 4 17:09:57.668470 extend-filesystems[2012]: Found nvme0n1p4
Sep 4 17:09:57.668470 extend-filesystems[2012]: Found nvme0n1p6
Sep 4 17:09:57.668470 extend-filesystems[2012]: Found nvme0n1p7
Sep 4 17:09:57.668470 extend-filesystems[2012]: Found nvme0n1p9
Sep 4 17:09:57.668470 extend-filesystems[2012]: Checking size of /dev/nvme0n1p9
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:13:39 UTC 2024 (1): Starting
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: ----------------------------------------------------
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: ntp-4 is maintained by Network Time Foundation,
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: corporation. Support and training for ntp-4 are
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: available at https://www.nwtime.org/support
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: ----------------------------------------------------
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: proto: precision = 0.108 usec (-23)
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: basedate set to 2024-08-23
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: gps base set to 2024-08-25 (week 2329)
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: Listen normally on 3 eth0 172.31.29.2:123
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: Listen normally on 4 lo [::1]:123
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: Listen normally on 5 eth0 [fe80::474:abff:fe00:d403%2]:123
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: Listening on routing socket on fd #22 for interface updates
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:09:57.694022 ntpd[2018]: 4 Sep 17:09:57 ntpd[2018]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:09:57.651763 ntpd[2018]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:13:39 UTC 2024 (1): Starting
Sep 4 17:09:57.676052 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:09:57.651813 ntpd[2018]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 17:09:57.698184 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:09:57.651833 ntpd[2018]: ----------------------------------------------------
Sep 4 17:09:57.651852 ntpd[2018]: ntp-4 is maintained by Network Time Foundation,
Sep 4 17:09:57.651871 ntpd[2018]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 17:09:57.651891 ntpd[2018]: corporation. Support and training for ntp-4 are
Sep 4 17:09:57.651910 ntpd[2018]: available at https://www.nwtime.org/support
Sep 4 17:09:57.651928 ntpd[2018]: ----------------------------------------------------
Sep 4 17:09:57.655605 ntpd[2018]: proto: precision = 0.108 usec (-23)
Sep 4 17:09:57.656560 ntpd[2018]: basedate set to 2024-08-23
Sep 4 17:09:57.656594 ntpd[2018]: gps base set to 2024-08-25 (week 2329)
Sep 4 17:09:57.663184 ntpd[2018]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 17:09:57.663306 ntpd[2018]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 17:09:57.663599 ntpd[2018]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 17:09:57.663664 ntpd[2018]: Listen normally on 3 eth0 172.31.29.2:123
Sep 4 17:09:57.663733 ntpd[2018]: Listen normally on 4 lo [::1]:123
Sep 4 17:09:57.663807 ntpd[2018]: Listen normally on 5 eth0 [fe80::474:abff:fe00:d403%2]:123
Sep 4 17:09:57.663868 ntpd[2018]: Listening on routing socket on fd #22 for interface updates
Sep 4 17:09:57.669198 ntpd[2018]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:09:57.669569 ntpd[2018]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:09:57.726530 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:09:57.734375 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:09:57.740023 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:09:57.754597 extend-filesystems[2012]: Resized partition /dev/nvme0n1p9
Sep 4 17:09:57.769893 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:09:57.771438 extend-filesystems[2051]: resize2fs 1.47.0 (5-Feb-2023)
Sep 4 17:09:57.770505 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:09:57.778238 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:09:57.792515 update_engine[2046]: I0904 17:09:57.781500 2046 main.cc:92] Flatcar Update Engine starting
Sep 4 17:09:57.778751 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:09:57.790454 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:09:57.813479 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 4 17:09:57.813574 coreos-metadata[2008]: Sep 04 17:09:57.807 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 17:09:57.814078 update_engine[2046]: I0904 17:09:57.806460 2046 update_check_scheduler.cc:74] Next update check in 8m47s
Sep 4 17:09:57.820718 coreos-metadata[2008]: Sep 04 17:09:57.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 4 17:09:57.820718 coreos-metadata[2008]: Sep 04 17:09:57.820 INFO Fetch successful
Sep 4 17:09:57.820718 coreos-metadata[2008]: Sep 04 17:09:57.820 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 4 17:09:57.818111 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:09:57.821669 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:09:57.823547 coreos-metadata[2008]: Sep 04 17:09:57.822 INFO Fetch successful
Sep 4 17:09:57.823547 coreos-metadata[2008]: Sep 04 17:09:57.822 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 4 17:09:57.834849 coreos-metadata[2008]: Sep 04 17:09:57.834 INFO Fetch successful
Sep 4 17:09:57.834849 coreos-metadata[2008]: Sep 04 17:09:57.834 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 4 17:09:57.845460 coreos-metadata[2008]: Sep 04 17:09:57.844 INFO Fetch successful
Sep 4 17:09:57.845460 coreos-metadata[2008]: Sep 04 17:09:57.844 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 4 17:09:57.845460 coreos-metadata[2008]: Sep 04 17:09:57.844 INFO Fetch failed with 404: resource not found
Sep 4 17:09:57.845460 coreos-metadata[2008]: Sep 04 17:09:57.844 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 4 17:09:57.845706 jq[2050]: true
Sep 4 17:09:57.857363 coreos-metadata[2008]: Sep 04 17:09:57.847 INFO Fetch successful
Sep 4 17:09:57.857363 coreos-metadata[2008]: Sep 04 17:09:57.847 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 4 17:09:57.857363 coreos-metadata[2008]: Sep 04 17:09:57.849 INFO Fetch successful
Sep 4 17:09:57.857363 coreos-metadata[2008]: Sep 04 17:09:57.849 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 4 17:09:57.857363 coreos-metadata[2008]: Sep 04 17:09:57.853 INFO Fetch successful
Sep 4 17:09:57.857363 coreos-metadata[2008]: Sep 04 17:09:57.853 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 4 17:09:57.857363 coreos-metadata[2008]: Sep 04 17:09:57.856 INFO Fetch successful
Sep 4 17:09:57.857363 coreos-metadata[2008]: Sep 04 17:09:57.856 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 4 17:09:57.870782 coreos-metadata[2008]: Sep 04 17:09:57.870 INFO Fetch successful
Sep 4 17:09:57.923935 (ntainerd)[2067]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:09:57.988624 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 4 17:09:58.009364 tar[2057]: linux-arm64/helm
Sep 4 17:09:57.972265 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:09:58.010046 jq[2066]: true
Sep 4 17:09:57.989606 dbus-daemon[2010]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 17:09:57.972324 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:09:57.975534 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:09:57.975575 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:09:57.982979 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:09:57.986546 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:09:58.000494 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:09:58.041278 extend-filesystems[2051]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 4 17:09:58.041278 extend-filesystems[2051]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 17:09:58.041278 extend-filesystems[2051]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 4 17:09:58.037634 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:09:58.073612 extend-filesystems[2012]: Resized filesystem in /dev/nvme0n1p9
Sep 4 17:09:58.038176 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:09:58.120546 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 4 17:09:58.136705 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 17:09:58.145129 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:09:58.163342 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 4 17:09:58.173609 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 4 17:09:58.346162 bash[2129]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:09:58.346457 amazon-ssm-agent[2108]: Initializing new seelog logger
Sep 4 17:09:58.346457 amazon-ssm-agent[2108]: New Seelog Logger Creation Complete
Sep 4 17:09:58.346457 amazon-ssm-agent[2108]: 2024/09/04 17:09:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:09:58.346457 amazon-ssm-agent[2108]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:09:58.346457 amazon-ssm-agent[2108]: 2024/09/04 17:09:58 processing appconfig overrides
Sep 4 17:09:58.357247 amazon-ssm-agent[2108]: 2024/09/04 17:09:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:09:58.357247 amazon-ssm-agent[2108]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:09:58.357247 amazon-ssm-agent[2108]: 2024/09/04 17:09:58 processing appconfig overrides
Sep 4 17:09:58.357247 amazon-ssm-agent[2108]: 2024/09/04 17:09:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:09:58.357247 amazon-ssm-agent[2108]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:09:58.357247 amazon-ssm-agent[2108]: 2024/09/04 17:09:58 processing appconfig overrides
Sep 4 17:09:58.351338 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:09:58.364492 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO Proxy environment variables:
Sep 4 17:09:58.373296 amazon-ssm-agent[2108]: 2024/09/04 17:09:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:09:58.373602 amazon-ssm-agent[2108]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:09:58.376010 amazon-ssm-agent[2108]: 2024/09/04 17:09:58 processing appconfig overrides
Sep 4 17:09:58.390808 systemd[1]: Starting sshkeys.service...
Sep 4 17:09:58.412254 systemd-logind[2036]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 4 17:09:58.469691 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2115)
Sep 4 17:09:58.469737 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO https_proxy:
Sep 4 17:09:58.412396 systemd-logind[2036]: Watching system buttons on /dev/input/event1 (Sleep Button)
Sep 4 17:09:58.416981 systemd-logind[2036]: New seat seat0.
Sep 4 17:09:58.467015 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:09:58.570253 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO http_proxy:
Sep 4 17:09:58.616332 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 17:09:58.622819 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 17:09:58.668433 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO no_proxy:
Sep 4 17:09:58.768256 locksmithd[2081]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:09:58.776409 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO Checking if agent identity type OnPrem can be assumed
Sep 4 17:09:58.869508 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO Checking if agent identity type EC2 can be assumed
Sep 4 17:09:58.970767 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO Agent will take identity from EC2
Sep 4 17:09:59.042100 containerd[2067]: time="2024-09-04T17:09:59.041945783Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep 4 17:09:59.042462 dbus-daemon[2010]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 4 17:09:59.042712 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 4 17:09:59.063753 dbus-daemon[2010]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2097 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 4 17:09:59.069930 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:09:59.121846 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 4 17:09:59.135261 coreos-metadata[2189]: Sep 04 17:09:59.134 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 17:09:59.152536 coreos-metadata[2189]: Sep 04 17:09:59.152 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 4 17:09:59.154001 coreos-metadata[2189]: Sep 04 17:09:59.153 INFO Fetch successful
Sep 4 17:09:59.154001 coreos-metadata[2189]: Sep 04 17:09:59.153 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 17:09:59.155120 coreos-metadata[2189]: Sep 04 17:09:59.154 INFO Fetch successful
Sep 4 17:09:59.158712 unknown[2189]: wrote ssh authorized keys file for user: core
Sep 4 17:09:59.178337 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:09:59.218154 polkitd[2249]: Started polkitd version 121
Sep 4 17:09:59.235874 update-ssh-keys[2253]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:09:59.238487 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 17:09:59.256904 systemd[1]: Finished sshkeys.service.
Sep 4 17:09:59.280593 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:09:59.295890 polkitd[2249]: Loading rules from directory /etc/polkit-1/rules.d
Sep 4 17:09:59.296027 polkitd[2249]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 4 17:09:59.302302 polkitd[2249]: Finished loading, compiling and executing 2 rules
Sep 4 17:09:59.303554 dbus-daemon[2010]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 4 17:09:59.303821 systemd[1]: Started polkit.service - Authorization Manager.
Sep 4 17:09:59.306234 polkitd[2249]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 4 17:09:59.372198 systemd-hostnamed[2097]: Hostname set to (transient)
Sep 4 17:09:59.372396 systemd-resolved[1942]: System hostname changed to 'ip-172-31-29-2'.
Sep 4 17:09:59.379754 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep 4 17:09:59.381835 containerd[2067]: time="2024-09-04T17:09:59.381750480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:09:59.381940 containerd[2067]: time="2024-09-04T17:09:59.381844512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:09:59.389134 containerd[2067]: time="2024-09-04T17:09:59.389054892Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:09:59.389134 containerd[2067]: time="2024-09-04T17:09:59.389122848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:09:59.389637 containerd[2067]: time="2024-09-04T17:09:59.389586024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:09:59.389729 containerd[2067]: time="2024-09-04T17:09:59.389634336Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:09:59.389864 containerd[2067]: time="2024-09-04T17:09:59.389824344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:09:59.390000 containerd[2067]: time="2024-09-04T17:09:59.389959908Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:09:59.390053 containerd[2067]: time="2024-09-04T17:09:59.389997024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:09:59.390257 containerd[2067]: time="2024-09-04T17:09:59.390151344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:09:59.392241 containerd[2067]: time="2024-09-04T17:09:59.390576840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:09:59.392241 containerd[2067]: time="2024-09-04T17:09:59.390621444Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 4 17:09:59.392241 containerd[2067]: time="2024-09-04T17:09:59.390647844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:09:59.392241 containerd[2067]: time="2024-09-04T17:09:59.390916176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:09:59.392241 containerd[2067]: time="2024-09-04T17:09:59.390948348Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:09:59.392241 containerd[2067]: time="2024-09-04T17:09:59.391059348Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 4 17:09:59.392241 containerd[2067]: time="2024-09-04T17:09:59.391086072Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:09:59.408460 containerd[2067]: time="2024-09-04T17:09:59.408392448Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:09:59.408595 containerd[2067]: time="2024-09-04T17:09:59.408466296Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:09:59.408595 containerd[2067]: time="2024-09-04T17:09:59.408499296Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:09:59.408595 containerd[2067]: time="2024-09-04T17:09:59.408563928Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:09:59.408754 containerd[2067]: time="2024-09-04T17:09:59.408599352Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:09:59.408754 containerd[2067]: time="2024-09-04T17:09:59.408624420Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:09:59.408754 containerd[2067]: time="2024-09-04T17:09:59.408655884Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:09:59.408930 containerd[2067]: time="2024-09-04T17:09:59.408889308Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:09:59.409000 containerd[2067]: time="2024-09-04T17:09:59.408933960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:09:59.409000 containerd[2067]: time="2024-09-04T17:09:59.408965844Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:09:59.409170 containerd[2067]: time="2024-09-04T17:09:59.408997152Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:09:59.409170 containerd[2067]: time="2024-09-04T17:09:59.409030284Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:09:59.409170 containerd[2067]: time="2024-09-04T17:09:59.409068648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:09:59.409170 containerd[2067]: time="2024-09-04T17:09:59.409098672Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:09:59.409170 containerd[2067]: time="2024-09-04T17:09:59.409128924Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:09:59.409170 containerd[2067]: time="2024-09-04T17:09:59.409160520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:09:59.409572 containerd[2067]: time="2024-09-04T17:09:59.409191288Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:09:59.409572 containerd[2067]: time="2024-09-04T17:09:59.409249428Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 17:09:59.409572 containerd[2067]: time="2024-09-04T17:09:59.409401960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:09:59.411257 containerd[2067]: time="2024-09-04T17:09:59.409717788Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:09:59.415261 containerd[2067]: time="2024-09-04T17:09:59.413711580Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 17:09:59.415261 containerd[2067]: time="2024-09-04T17:09:59.413817564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.415261 containerd[2067]: time="2024-09-04T17:09:59.413853264Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 17:09:59.415261 containerd[2067]: time="2024-09-04T17:09:59.413928096Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 17:09:59.419250 containerd[2067]: time="2024-09-04T17:09:59.417181692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.419250 containerd[2067]: time="2024-09-04T17:09:59.417319056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.419250 containerd[2067]: time="2024-09-04T17:09:59.417363828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.419250 containerd[2067]: time="2024-09-04T17:09:59.417423336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.419250 containerd[2067]: time="2024-09-04T17:09:59.417469284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.419250 containerd[2067]: time="2024-09-04T17:09:59.417512652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.419250 containerd[2067]: time="2024-09-04T17:09:59.417558288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.419250 containerd[2067]: time="2024-09-04T17:09:59.417597552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.419250 containerd[2067]: time="2024-09-04T17:09:59.417649824Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:09:59.431422 containerd[2067]: time="2024-09-04T17:09:59.430652485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.431623 containerd[2067]: time="2024-09-04T17:09:59.431579737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.431881 containerd[2067]: time="2024-09-04T17:09:59.431838421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.433946 containerd[2067]: time="2024-09-04T17:09:59.433896817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.434020 containerd[2067]: time="2024-09-04T17:09:59.433976437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.434068 containerd[2067]: time="2024-09-04T17:09:59.434021269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.434116 containerd[2067]: time="2024-09-04T17:09:59.434082013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.434162 containerd[2067]: time="2024-09-04T17:09:59.434112925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:09:59.437266 containerd[2067]: time="2024-09-04T17:09:59.436203241Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:09:59.437266 containerd[2067]: time="2024-09-04T17:09:59.436392769Z" level=info msg="Connect containerd service" Sep 4 17:09:59.437266 containerd[2067]: time="2024-09-04T17:09:59.436529989Z" level=info msg="using legacy CRI server" Sep 4 17:09:59.437266 containerd[2067]: time="2024-09-04T17:09:59.436551973Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:09:59.441251 containerd[2067]: time="2024-09-04T17:09:59.439302541Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:09:59.442624 containerd[2067]: time="2024-09-04T17:09:59.442064065Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:09:59.442981 containerd[2067]: time="2024-09-04T17:09:59.442696417Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Sep 4 17:09:59.446464 containerd[2067]: time="2024-09-04T17:09:59.442854229Z" level=info msg="Start subscribing containerd event" Sep 4 17:09:59.446567 containerd[2067]: time="2024-09-04T17:09:59.446488165Z" level=info msg="Start recovering state" Sep 4 17:09:59.446657 containerd[2067]: time="2024-09-04T17:09:59.446621041Z" level=info msg="Start event monitor" Sep 4 17:09:59.446712 containerd[2067]: time="2024-09-04T17:09:59.446666257Z" level=info msg="Start snapshots syncer" Sep 4 17:09:59.446712 containerd[2067]: time="2024-09-04T17:09:59.446694001Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:09:59.446814 containerd[2067]: time="2024-09-04T17:09:59.446713441Z" level=info msg="Start streaming server" Sep 4 17:09:59.447298 containerd[2067]: time="2024-09-04T17:09:59.447247417Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:09:59.447394 containerd[2067]: time="2024-09-04T17:09:59.447310405Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:09:59.447394 containerd[2067]: time="2024-09-04T17:09:59.447350089Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:09:59.448180 containerd[2067]: time="2024-09-04T17:09:59.448129213Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:09:59.448424 containerd[2067]: time="2024-09-04T17:09:59.448385665Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:09:59.453292 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 17:09:59.458351 containerd[2067]: time="2024-09-04T17:09:59.458294977Z" level=info msg="containerd successfully booted in 0.454447s"
Sep 4 17:09:59.482131 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Sep 4 17:09:59.585238 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO [amazon-ssm-agent] Starting Core Agent
Sep 4 17:09:59.684309 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Sep 4 17:09:59.783205 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO [Registrar] Starting registrar module
Sep 4 17:09:59.886352 amazon-ssm-agent[2108]: 2024-09-04 17:09:58 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Sep 4 17:09:59.954464 sshd_keygen[2059]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:09:59.983000 amazon-ssm-agent[2108]: 2024-09-04 17:09:59 INFO [EC2Identity] EC2 registration was successful.
Sep 4 17:09:59.985623 amazon-ssm-agent[2108]: 2024-09-04 17:09:59 INFO [CredentialRefresher] credentialRefresher has started
Sep 4 17:09:59.985623 amazon-ssm-agent[2108]: 2024-09-04 17:09:59 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 4 17:09:59.985623 amazon-ssm-agent[2108]: 2024-09-04 17:09:59 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 4 17:09:59.986902 amazon-ssm-agent[2108]: 2024-09-04 17:09:59 INFO [CredentialRefresher] Next credential rotation will be in 31.3249526342 minutes
Sep 4 17:10:00.042545 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:10:00.055794 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:10:00.094889 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:10:00.095800 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:10:00.112763 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:10:00.135588 tar[2057]: linux-arm64/LICENSE
Sep 4 17:10:00.135588 tar[2057]: linux-arm64/README.md
Sep 4 17:10:00.161159 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:10:00.180602 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:10:00.196609 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:10:00.202128 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:10:00.206060 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:10:00.306582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:10:00.312194 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:10:00.316333 systemd[1]: Startup finished in 10.641s (kernel) + 9.069s (userspace) = 19.711s.
Sep 4 17:10:00.325356 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:10:01.014631 amazon-ssm-agent[2108]: 2024-09-04 17:10:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 4 17:10:01.115945 amazon-ssm-agent[2108]: 2024-09-04 17:10:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2317) started
Sep 4 17:10:01.217328 amazon-ssm-agent[2108]: 2024-09-04 17:10:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 4 17:10:01.360909 kubelet[2306]: E0904 17:10:01.360736 2306 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:10:01.366664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:10:01.367069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:10:04.943121 systemd-resolved[1942]: Clock change detected. Flushing caches.
Sep 4 17:10:05.819146 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 17:10:05.824699 systemd[1]: Started sshd@0-172.31.29.2:22-139.178.89.65:53324.service - OpenSSH per-connection server daemon (139.178.89.65:53324).
Sep 4 17:10:06.012488 sshd[2329]: Accepted publickey for core from 139.178.89.65 port 53324 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:06.016053 sshd[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:06.037033 systemd-logind[2036]: New session 1 of user core.
Sep 4 17:10:06.038363 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:10:06.044758 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:10:06.083990 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:10:06.098013 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:10:06.117932 (systemd)[2335]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:06.332860 systemd[2335]: Queued start job for default target default.target.
Sep 4 17:10:06.333966 systemd[2335]: Created slice app.slice - User Application Slice.
Sep 4 17:10:06.334008 systemd[2335]: Reached target paths.target - Paths.
Sep 4 17:10:06.334038 systemd[2335]: Reached target timers.target - Timers.
Sep 4 17:10:06.345419 systemd[2335]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:10:06.358108 systemd[2335]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:10:06.358219 systemd[2335]: Reached target sockets.target - Sockets.
Sep 4 17:10:06.358251 systemd[2335]: Reached target basic.target - Basic System.
Sep 4 17:10:06.358364 systemd[2335]: Reached target default.target - Main User Target.
Sep 4 17:10:06.358426 systemd[2335]: Startup finished in 228ms.
Sep 4 17:10:06.358595 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:10:06.368843 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:10:06.519858 systemd[1]: Started sshd@1-172.31.29.2:22-139.178.89.65:53336.service - OpenSSH per-connection server daemon (139.178.89.65:53336).
Sep 4 17:10:06.703306 sshd[2347]: Accepted publickey for core from 139.178.89.65 port 53336 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:06.705773 sshd[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:06.714418 systemd-logind[2036]: New session 2 of user core.
Sep 4 17:10:06.723738 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:10:06.851596 sshd[2347]: pam_unix(sshd:session): session closed for user core
Sep 4 17:10:06.857487 systemd[1]: sshd@1-172.31.29.2:22-139.178.89.65:53336.service: Deactivated successfully.
Sep 4 17:10:06.864561 systemd-logind[2036]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:10:06.865945 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:10:06.867567 systemd-logind[2036]: Removed session 2.
Sep 4 17:10:06.881798 systemd[1]: Started sshd@2-172.31.29.2:22-139.178.89.65:53348.service - OpenSSH per-connection server daemon (139.178.89.65:53348).
Sep 4 17:10:07.060488 sshd[2355]: Accepted publickey for core from 139.178.89.65 port 53348 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:07.062333 sshd[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:07.070725 systemd-logind[2036]: New session 3 of user core.
Sep 4 17:10:07.082799 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:10:07.204595 sshd[2355]: pam_unix(sshd:session): session closed for user core
Sep 4 17:10:07.211506 systemd[1]: sshd@2-172.31.29.2:22-139.178.89.65:53348.service: Deactivated successfully.
Sep 4 17:10:07.216169 systemd-logind[2036]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:10:07.217534 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:10:07.220370 systemd-logind[2036]: Removed session 3.
Sep 4 17:10:07.233849 systemd[1]: Started sshd@3-172.31.29.2:22-139.178.89.65:53354.service - OpenSSH per-connection server daemon (139.178.89.65:53354).
Sep 4 17:10:07.411558 sshd[2363]: Accepted publickey for core from 139.178.89.65 port 53354 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:07.414003 sshd[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:07.422641 systemd-logind[2036]: New session 4 of user core.
Sep 4 17:10:07.432878 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:10:07.561540 sshd[2363]: pam_unix(sshd:session): session closed for user core
Sep 4 17:10:07.568443 systemd-logind[2036]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:10:07.569625 systemd[1]: sshd@3-172.31.29.2:22-139.178.89.65:53354.service: Deactivated successfully.
Sep 4 17:10:07.574384 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:10:07.576102 systemd-logind[2036]: Removed session 4.
Sep 4 17:10:07.595773 systemd[1]: Started sshd@4-172.31.29.2:22-139.178.89.65:43486.service - OpenSSH per-connection server daemon (139.178.89.65:43486).
Sep 4 17:10:07.772608 sshd[2371]: Accepted publickey for core from 139.178.89.65 port 43486 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:07.775721 sshd[2371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:07.784849 systemd-logind[2036]: New session 5 of user core.
Sep 4 17:10:07.794982 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:10:07.920436 sudo[2375]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:10:07.920999 sudo[2375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:10:07.935217 sudo[2375]: pam_unix(sudo:session): session closed for user root
Sep 4 17:10:07.960653 sshd[2371]: pam_unix(sshd:session): session closed for user core
Sep 4 17:10:07.968356 systemd[1]: sshd@4-172.31.29.2:22-139.178.89.65:43486.service: Deactivated successfully.
Sep 4 17:10:07.969574 systemd-logind[2036]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:10:07.974300 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:10:07.975955 systemd-logind[2036]: Removed session 5.
Sep 4 17:10:07.988871 systemd[1]: Started sshd@5-172.31.29.2:22-139.178.89.65:43502.service - OpenSSH per-connection server daemon (139.178.89.65:43502).
Sep 4 17:10:08.169650 sshd[2380]: Accepted publickey for core from 139.178.89.65 port 43502 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:08.172308 sshd[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:08.180141 systemd-logind[2036]: New session 6 of user core.
Sep 4 17:10:08.188733 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:10:08.294095 sudo[2385]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:10:08.295224 sudo[2385]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:10:08.301425 sudo[2385]: pam_unix(sudo:session): session closed for user root
Sep 4 17:10:08.311314 sudo[2384]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:10:08.312369 sudo[2384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:10:08.344746 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:10:08.347538 auditctl[2388]: No rules
Sep 4 17:10:08.350497 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:10:08.351026 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:10:08.360538 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:10:08.403873 augenrules[2407]: No rules
Sep 4 17:10:08.407100 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:10:08.411015 sudo[2384]: pam_unix(sudo:session): session closed for user root
Sep 4 17:10:08.437234 sshd[2380]: pam_unix(sshd:session): session closed for user core
Sep 4 17:10:08.443190 systemd[1]: sshd@5-172.31.29.2:22-139.178.89.65:43502.service: Deactivated successfully.
Sep 4 17:10:08.449643 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:10:08.451041 systemd-logind[2036]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:10:08.453209 systemd-logind[2036]: Removed session 6.
Sep 4 17:10:08.465785 systemd[1]: Started sshd@6-172.31.29.2:22-139.178.89.65:43516.service - OpenSSH per-connection server daemon (139.178.89.65:43516).
Sep 4 17:10:08.646907 sshd[2416]: Accepted publickey for core from 139.178.89.65 port 43516 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:08.649366 sshd[2416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:08.656813 systemd-logind[2036]: New session 7 of user core.
Sep 4 17:10:08.666703 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:10:08.773215 sudo[2420]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:10:08.773936 sudo[2420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:10:08.936717 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:10:08.949910 (dockerd)[2430]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:10:09.267565 dockerd[2430]: time="2024-09-04T17:10:09.267462397Z" level=info msg="Starting up"
Sep 4 17:10:09.647486 dockerd[2430]: time="2024-09-04T17:10:09.646901811Z" level=info msg="Loading containers: start."
Sep 4 17:10:09.800406 kernel: Initializing XFRM netlink socket
Sep 4 17:10:09.832696 (udev-worker)[2442]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:10:09.920448 systemd-networkd[1610]: docker0: Link UP
Sep 4 17:10:09.943303 dockerd[2430]: time="2024-09-04T17:10:09.942745300Z" level=info msg="Loading containers: done."
Sep 4 17:10:10.027107 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck991033919-merged.mount: Deactivated successfully.
Sep 4 17:10:10.031611 dockerd[2430]: time="2024-09-04T17:10:10.031508065Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:10:10.031900 dockerd[2430]: time="2024-09-04T17:10:10.031864453Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep 4 17:10:10.032119 dockerd[2430]: time="2024-09-04T17:10:10.032086825Z" level=info msg="Daemon has completed initialization"
Sep 4 17:10:10.082160 dockerd[2430]: time="2024-09-04T17:10:10.080418217Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:10:10.081774 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:10:11.245998 containerd[2067]: time="2024-09-04T17:10:11.245599635Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\""
Sep 4 17:10:11.836957 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:10:11.846119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:10:11.896366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757056681.mount: Deactivated successfully.
Sep 4 17:10:12.258686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:10:12.265863 (kubelet)[2587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:10:12.393317 kubelet[2587]: E0904 17:10:12.392671 2587 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:10:12.401590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:10:12.401967 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:10:14.963721 containerd[2067]: time="2024-09-04T17:10:14.963641529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:14.965801 containerd[2067]: time="2024-09-04T17:10:14.965739681Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=31599022"
Sep 4 17:10:14.967599 containerd[2067]: time="2024-09-04T17:10:14.967517313Z" level=info msg="ImageCreate event name:\"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:14.974220 containerd[2067]: time="2024-09-04T17:10:14.974124981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:14.976731 containerd[2067]: time="2024-09-04T17:10:14.976347597Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"31595822\" in 3.730682178s"
Sep 4 17:10:14.976731 containerd[2067]: time="2024-09-04T17:10:14.976406289Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\""
Sep 4 17:10:15.017791 containerd[2067]: time="2024-09-04T17:10:15.017744298Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\""
Sep 4 17:10:18.779317 containerd[2067]: time="2024-09-04T17:10:18.779174436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:18.781330 containerd[2067]: time="2024-09-04T17:10:18.781275660Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=29019496"
Sep 4 17:10:18.785397 containerd[2067]: time="2024-09-04T17:10:18.785315340Z" level=info msg="ImageCreate event name:\"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:18.791928 containerd[2067]: time="2024-09-04T17:10:18.791860332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:18.793621 containerd[2067]: time="2024-09-04T17:10:18.793451160Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"30506763\" in 3.77547111s"
Sep 4 17:10:18.793621 containerd[2067]: time="2024-09-04T17:10:18.793506984Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\""
Sep 4 17:10:18.834324 containerd[2067]: time="2024-09-04T17:10:18.834241909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\""
Sep 4 17:10:20.816592 containerd[2067]: time="2024-09-04T17:10:20.816511994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:20.818744 containerd[2067]: time="2024-09-04T17:10:20.818672126Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=15533681"
Sep 4 17:10:20.820041 containerd[2067]: time="2024-09-04T17:10:20.819953594Z" level=info msg="ImageCreate event name:\"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:20.825712 containerd[2067]: time="2024-09-04T17:10:20.825632294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:20.828160 containerd[2067]: time="2024-09-04T17:10:20.827966618Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"17020966\" in 1.993642125s"
Sep 4 17:10:20.828160 containerd[2067]: time="2024-09-04T17:10:20.828028178Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\""
Sep 4 17:10:20.865806 containerd[2067]: time="2024-09-04T17:10:20.865747287Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\""
Sep 4 17:10:22.188645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762053792.mount: Deactivated successfully.
Sep 4 17:10:22.403563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:10:22.411645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:10:22.819521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:10:22.838067 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:10:22.965801 kubelet[2678]: E0904 17:10:22.965725 2678 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:10:22.972849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:10:22.975253 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:10:23.278762 containerd[2067]: time="2024-09-04T17:10:23.277626555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:23.279912 containerd[2067]: time="2024-09-04T17:10:23.279830187Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=24977930"
Sep 4 17:10:23.281535 containerd[2067]: time="2024-09-04T17:10:23.281441427Z" level=info msg="ImageCreate event name:\"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:23.285293 containerd[2067]: time="2024-09-04T17:10:23.285174315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:23.287230 containerd[2067]: time="2024-09-04T17:10:23.286863651Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"24976949\" in 2.421052836s"
Sep 4 17:10:23.287230 containerd[2067]: time="2024-09-04T17:10:23.286932951Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\""
Sep 4 17:10:23.334239 containerd[2067]: time="2024-09-04T17:10:23.334189119Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:10:23.804413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649503884.mount: Deactivated successfully.
Sep 4 17:10:23.814473 containerd[2067]: time="2024-09-04T17:10:23.813492089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:23.815828 containerd[2067]: time="2024-09-04T17:10:23.815404421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Sep 4 17:10:23.817117 containerd[2067]: time="2024-09-04T17:10:23.816992789Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:23.821845 containerd[2067]: time="2024-09-04T17:10:23.821733365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:23.824017 containerd[2067]: time="2024-09-04T17:10:23.823795001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 489.31763ms"
Sep 4 17:10:23.824017 containerd[2067]: time="2024-09-04T17:10:23.823869257Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Sep 4 17:10:23.864589 containerd[2067]: time="2024-09-04T17:10:23.864410093Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep 4 17:10:24.467458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482813774.mount: Deactivated successfully.
Sep 4 17:10:28.759332 containerd[2067]: time="2024-09-04T17:10:28.759186766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:28.761484 containerd[2067]: time="2024-09-04T17:10:28.761430442Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Sep 4 17:10:28.762304 containerd[2067]: time="2024-09-04T17:10:28.761858434Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:28.768101 containerd[2067]: time="2024-09-04T17:10:28.768012454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:28.770972 containerd[2067]: time="2024-09-04T17:10:28.770720386Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.906253653s"
Sep 4 17:10:28.770972 containerd[2067]: time="2024-09-04T17:10:28.770784550Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Sep 4 17:10:28.810381 containerd[2067]: time="2024-09-04T17:10:28.810311518Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Sep 4 17:10:29.466186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2383961209.mount: Deactivated successfully.
Sep 4 17:10:29.700924 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 4 17:10:30.039678 containerd[2067]: time="2024-09-04T17:10:30.039614204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:30.041189 containerd[2067]: time="2024-09-04T17:10:30.041136800Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462"
Sep 4 17:10:30.042114 containerd[2067]: time="2024-09-04T17:10:30.042031376Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:30.046502 containerd[2067]: time="2024-09-04T17:10:30.046443080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:30.048573 containerd[2067]: time="2024-09-04T17:10:30.048387176Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.23800619s"
Sep 4 17:10:30.048573 containerd[2067]: time="2024-09-04T17:10:30.048449996Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Sep 4 17:10:33.154244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 17:10:33.166217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:10:33.520833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
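The containerd entries above repeat one pattern per image: ImageCreate events, a "stop pulling" byte count, then a "Pulled image ... in &lt;duration&gt;" summary. A small, hypothetical parser for tallying those pull durations from journal lines like the ones shown (the regex and helper are my own sketch, not containerd tooling):

```python
import re

# Matches containerd's `Pulled image "<ref>" ... in <duration>` message,
# tolerating the backslash-escaped quotes seen in this journal dump.
PULLED = re.compile(r'Pulled image \\?"(?P<ref>[^"\\]+)\\?".*? in (?P<dur>[0-9.]+m?s)')

def pull_durations(lines):
    """Map image reference -> pull duration string for matching lines."""
    out = {}
    for line in lines:
        m = PULLED.search(line)
        if m:
            out[m.group("ref")] = m.group("dur")
    return out
```

Run over this section it would report, e.g., kube-proxy:v1.28.13 in 2.421052836s and etcd:3.5.10-0 in 4.906253653s.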
Sep 4 17:10:33.532073 (kubelet)[2834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:10:33.637779 kubelet[2834]: E0904 17:10:33.637702 2834 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:10:33.642079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:10:33.642539 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:10:39.380687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:10:39.388736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:10:39.430406 systemd[1]: Reloading requested from client PID 2850 ('systemctl') (unit session-7.scope)...
Sep 4 17:10:39.430433 systemd[1]: Reloading...
Sep 4 17:10:39.655979 zram_generator::config[2888]: No configuration found.
Sep 4 17:10:39.906286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:10:40.064667 systemd[1]: Reloading finished in 633 ms.
Sep 4 17:10:40.136505 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 17:10:40.136695 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 17:10:40.137319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:10:40.146000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:10:40.460672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:10:40.477981 (kubelet)[2960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:10:40.557719 kubelet[2960]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:10:40.557719 kubelet[2960]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:10:40.557719 kubelet[2960]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:10:40.558347 kubelet[2960]: I0904 17:10:40.557850 2960 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:10:41.966331 kubelet[2960]: I0904 17:10:41.965758 2960 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep 4 17:10:41.966331 kubelet[2960]: I0904 17:10:41.965800 2960 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:10:41.966331 kubelet[2960]: I0904 17:10:41.966120 2960 server.go:895] "Client rotation is on, will bootstrap in background"
Sep 4 17:10:41.996527 kubelet[2960]: I0904 17:10:41.995492 2960 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:10:41.998114 kubelet[2960]: E0904 17:10:41.998005 2960 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:42.009364 kubelet[2960]: W0904 17:10:42.009309 2960 machine.go:65] Cannot read vendor id correctly, set empty.
Sep 4 17:10:42.010601 kubelet[2960]: I0904 17:10:42.010553 2960 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:10:42.011237 kubelet[2960]: I0904 17:10:42.011192 2960 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:10:42.011577 kubelet[2960]: I0904 17:10:42.011520 2960 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:10:42.011745 kubelet[2960]: I0904 17:10:42.011587 2960 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:10:42.011745 kubelet[2960]: I0904 17:10:42.011609 2960 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:10:42.011867 kubelet[2960]: I0904 17:10:42.011806 2960 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:10:42.014853 kubelet[2960]: I0904 17:10:42.014806 2960 kubelet.go:393] "Attempting to sync node with API server"
Sep 4 17:10:42.014853 kubelet[2960]: I0904 17:10:42.014856 2960 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:10:42.015012 kubelet[2960]: I0904 17:10:42.014930 2960 kubelet.go:309] "Adding apiserver pod source"
Sep 4 17:10:42.015012 kubelet[2960]: I0904 17:10:42.014957 2960 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:10:42.017234 kubelet[2960]: W0904 17:10:42.017171 2960 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.29.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:42.017352 kubelet[2960]: E0904 17:10:42.017245 2960 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:42.018011 kubelet[2960]: W0904 17:10:42.017937 2960 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.29.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-2&limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:42.018129 kubelet[2960]: E0904 17:10:42.018017 2960 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-2&limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:42.018197 kubelet[2960]: I0904 17:10:42.018179 2960 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep 4 17:10:42.023381 kubelet[2960]: W0904 17:10:42.023325 2960 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:10:42.027639 kubelet[2960]: I0904 17:10:42.027594 2960 server.go:1232] "Started kubelet"
Sep 4 17:10:42.029457 kubelet[2960]: I0904 17:10:42.029415 2960 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:10:42.030673 kubelet[2960]: I0904 17:10:42.030621 2960 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:10:42.032371 kubelet[2960]: I0904 17:10:42.032335 2960 server.go:462] "Adding debug handlers to kubelet server"
Sep 4 17:10:42.033419 kubelet[2960]: I0904 17:10:42.033380 2960 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep 4 17:10:42.034938 kubelet[2960]: I0904 17:10:42.034855 2960 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:10:42.040433 kubelet[2960]: I0904 17:10:42.040352 2960 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:10:42.040893 kubelet[2960]: I0904 17:10:42.040602 2960 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:10:42.040893 kubelet[2960]: I0904 17:10:42.040712 2960 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:10:42.041631 kubelet[2960]: E0904 17:10:42.041185 2960 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-29-2.17f219adecb3a884", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-29-2", UID:"ip-172-31-29-2", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-29-2"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 10, 42, 27554948, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 10, 42, 27554948, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-29-2"}': 'Post "https://172.31.29.2:6443/api/v1/namespaces/default/events": dial tcp 172.31.29.2:6443: connect: connection refused'(may retry after sleeping)
Sep 4 17:10:42.042839 kubelet[2960]: E0904 17:10:42.042126 2960 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-2?timeout=10s\": dial tcp 172.31.29.2:6443: connect: connection refused" interval="200ms"
Sep 4 17:10:42.042839 kubelet[2960]: W0904 17:10:42.042684 2960 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.29.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:42.043021 kubelet[2960]: E0904 17:10:42.042878 2960 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:42.045616 kubelet[2960]: E0904 17:10:42.045553 2960 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep 4 17:10:42.045616 kubelet[2960]: E0904 17:10:42.045615 2960 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:10:42.079783 kubelet[2960]: I0904 17:10:42.079688 2960 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:10:42.087781 kubelet[2960]: I0904 17:10:42.086221 2960 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:10:42.087781 kubelet[2960]: I0904 17:10:42.087398 2960 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:10:42.087781 kubelet[2960]: I0904 17:10:42.087458 2960 kubelet.go:2303] "Starting kubelet main sync loop"
Sep 4 17:10:42.087781 kubelet[2960]: E0904 17:10:42.087578 2960 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:10:42.090393 kubelet[2960]: W0904 17:10:42.090129 2960 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.29.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:42.090393 kubelet[2960]: E0904 17:10:42.090359 2960 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:42.145991 kubelet[2960]: I0904 17:10:42.145937 2960 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-2"
Sep 4 17:10:42.146746 kubelet[2960]: E0904 17:10:42.146663 2960 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.2:6443/api/v1/nodes\": dial tcp 172.31.29.2:6443: connect: connection refused" node="ip-172-31-29-2"
Sep 4 17:10:42.149821 kubelet[2960]: I0904 17:10:42.149763 2960 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:10:42.149821 kubelet[2960]: I0904 17:10:42.149801 2960 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:10:42.150012 kubelet[2960]: I0904 17:10:42.149834 2960 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:10:42.151915 kubelet[2960]: I0904 17:10:42.151879 2960 policy_none.go:49] "None policy: Start"
Sep 4 17:10:42.153487 kubelet[2960]: I0904 17:10:42.153396 2960 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep 4 17:10:42.153487 kubelet[2960]: I0904 17:10:42.153444 2960 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:10:42.161373 kubelet[2960]: I0904 17:10:42.161114 2960 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:10:42.161685 kubelet[2960]: I0904 17:10:42.161550 2960 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:10:42.166484 kubelet[2960]: E0904 17:10:42.166408 2960 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-2\" not found"
Sep 4 17:10:42.187987 kubelet[2960]: I0904 17:10:42.187921 2960 topology_manager.go:215] "Topology Admit Handler" podUID="73f9f0da602050afff71be161564023b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-2"
Sep 4 17:10:42.190125 kubelet[2960]: I0904 17:10:42.189798 2960 topology_manager.go:215] "Topology Admit Handler" podUID="8ef9f090618530676d3dd267f8d19cf6" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-2"
Sep 4 17:10:42.192529 kubelet[2960]: I0904 17:10:42.192471 2960 topology_manager.go:215] "Topology Admit Handler" podUID="6a4f5bc25548c45bcd2834f9e6cc361a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-2"
Sep 4 17:10:42.243477 kubelet[2960]: E0904 17:10:42.242836 2960 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-2?timeout=10s\": dial tcp 172.31.29.2:6443: connect: connection refused" interval="400ms"
Sep 4 17:10:42.342372 kubelet[2960]: I0904 17:10:42.342294 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ef9f090618530676d3dd267f8d19cf6-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-2\" (UID: \"8ef9f090618530676d3dd267f8d19cf6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-2"
Sep 4 17:10:42.342372 kubelet[2960]: I0904 17:10:42.342366 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ef9f090618530676d3dd267f8d19cf6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-2\" (UID: \"8ef9f090618530676d3dd267f8d19cf6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-2"
Sep 4 17:10:42.342781 kubelet[2960]: I0904 17:10:42.342417 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ef9f090618530676d3dd267f8d19cf6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-2\" (UID: \"8ef9f090618530676d3dd267f8d19cf6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-2"
Sep 4 17:10:42.342781 kubelet[2960]: I0904 17:10:42.342462 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a4f5bc25548c45bcd2834f9e6cc361a-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-2\" (UID: \"6a4f5bc25548c45bcd2834f9e6cc361a\") " pod="kube-system/kube-scheduler-ip-172-31-29-2"
Sep 4 17:10:42.342781 kubelet[2960]: I0904 17:10:42.342507 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f9f0da602050afff71be161564023b-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-2\" (UID: \"73f9f0da602050afff71be161564023b\") " pod="kube-system/kube-apiserver-ip-172-31-29-2"
Sep 4 17:10:42.342781 kubelet[2960]: I0904 17:10:42.342554 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f9f0da602050afff71be161564023b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-2\" (UID: \"73f9f0da602050afff71be161564023b\") " pod="kube-system/kube-apiserver-ip-172-31-29-2"
Sep 4 17:10:42.342781 kubelet[2960]: I0904 17:10:42.342602 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ef9f090618530676d3dd267f8d19cf6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-2\" (UID: \"8ef9f090618530676d3dd267f8d19cf6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-2"
Sep 4 17:10:42.343068 kubelet[2960]: I0904 17:10:42.342645 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f9f0da602050afff71be161564023b-ca-certs\") pod \"kube-apiserver-ip-172-31-29-2\" (UID: \"73f9f0da602050afff71be161564023b\") " pod="kube-system/kube-apiserver-ip-172-31-29-2"
Sep 4 17:10:42.343068 kubelet[2960]: I0904 17:10:42.342688 2960 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8ef9f090618530676d3dd267f8d19cf6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-2\" (UID: \"8ef9f090618530676d3dd267f8d19cf6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-2"
Sep 4 17:10:42.349126 kubelet[2960]: I0904 17:10:42.349018 2960 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-2"
Sep 4 17:10:42.349581 kubelet[2960]: E0904 17:10:42.349537 2960 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.2:6443/api/v1/nodes\": dial tcp 172.31.29.2:6443: connect: connection refused" node="ip-172-31-29-2"
Sep 4 17:10:42.508871 containerd[2067]: time="2024-09-04T17:10:42.508680958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-2,Uid:73f9f0da602050afff71be161564023b,Namespace:kube-system,Attempt:0,}"
Sep 4 17:10:42.510640 containerd[2067]: time="2024-09-04T17:10:42.510427162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-2,Uid:8ef9f090618530676d3dd267f8d19cf6,Namespace:kube-system,Attempt:0,}"
Sep 4 17:10:42.520798 containerd[2067]: time="2024-09-04T17:10:42.520634470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-2,Uid:6a4f5bc25548c45bcd2834f9e6cc361a,Namespace:kube-system,Attempt:0,}"
Sep 4 17:10:42.644426 kubelet[2960]: E0904 17:10:42.644378 2960 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-2?timeout=10s\": dial tcp 172.31.29.2:6443: connect: connection refused" interval="800ms"
Sep 4 17:10:42.751753 kubelet[2960]: I0904 17:10:42.751655 2960 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-2"
Sep 4 17:10:42.752155 kubelet[2960]: E0904 17:10:42.752123 2960 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.2:6443/api/v1/nodes\": dial tcp 172.31.29.2:6443: connect: connection refused" node="ip-172-31-29-2"
Sep 4 17:10:42.903316 update_engine[2046]: I0904 17:10:42.902688 2046 update_attempter.cc:509] Updating boot flags...
Sep 4 17:10:42.979427 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3004)
Sep 4 17:10:43.024753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113018073.mount: Deactivated successfully.
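Note the lease-controller retry interval in the errors above doubling across attempts while the API server at 172.31.29.2:6443 still refuses connections: interval="200ms", then "400ms", then "800ms". A minimal sketch of that doubling backoff (the cap and attempt count here are illustrative assumptions, not kubelet's actual constants):

```python
def backoff_intervals(initial_ms=200, factor=2, cap_ms=7000, attempts=6):
    """Generate capped, geometrically increasing retry intervals (ms),
    matching the 200ms -> 400ms -> 800ms progression seen in the log."""
    out, interval = [], initial_ms
    for _ in range(attempts):
        out.append(min(interval, cap_ms))
        interval *= factor
    return out
```

The retries stop failing once the static-pod control plane (the RunPodSandbox entries above) brings the API server up.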
Sep 4 17:10:43.037999 containerd[2067]: time="2024-09-04T17:10:43.036938217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:10:43.042465 containerd[2067]: time="2024-09-04T17:10:43.042409233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Sep 4 17:10:43.045380 containerd[2067]: time="2024-09-04T17:10:43.045320241Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:10:43.048564 containerd[2067]: time="2024-09-04T17:10:43.048467613Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:10:43.053976 containerd[2067]: time="2024-09-04T17:10:43.052346385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:10:43.062562 containerd[2067]: time="2024-09-04T17:10:43.061790217Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:10:43.062562 containerd[2067]: time="2024-09-04T17:10:43.062403261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:10:43.067388 containerd[2067]: time="2024-09-04T17:10:43.067050345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.226707ms"
Sep 4 17:10:43.082314 containerd[2067]: time="2024-09-04T17:10:43.080247525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:10:43.084078 containerd[2067]: time="2024-09-04T17:10:43.083996217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.215287ms"
Sep 4 17:10:43.109296 containerd[2067]: time="2024-09-04T17:10:43.106350777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 595.514895ms"
Sep 4 17:10:43.245736 kubelet[2960]: W0904 17:10:43.245529 2960 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.29.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:43.251520 kubelet[2960]: E0904 17:10:43.251400 2960 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:43.295327 kubelet[2960]: W0904 17:10:43.291849 2960 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.29.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-2&limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:43.295327 kubelet[2960]: E0904 17:10:43.291947 2960 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-2&limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:43.347874 kubelet[2960]: W0904 17:10:43.347798 2960 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.29.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:43.348515 kubelet[2960]: E0904 17:10:43.348333 2960 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.2:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused
Sep 4 17:10:43.371325 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3005)
Sep 4 17:10:43.411388 containerd[2067]: time="2024-09-04T17:10:43.411212039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:10:43.411388 containerd[2067]: time="2024-09-04T17:10:43.411329291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:10:43.414278 containerd[2067]: time="2024-09-04T17:10:43.411389543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:10:43.414278 containerd[2067]: time="2024-09-04T17:10:43.411425339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:10:43.419848 containerd[2067]: time="2024-09-04T17:10:43.416744795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:10:43.420161 containerd[2067]: time="2024-09-04T17:10:43.419055983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:10:43.420161 containerd[2067]: time="2024-09-04T17:10:43.419954603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:10:43.420161 containerd[2067]: time="2024-09-04T17:10:43.419987207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:10:43.444179 kubelet[2960]: W0904 17:10:43.443734 2960 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.29.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused Sep 4 17:10:43.444179 kubelet[2960]: E0904 17:10:43.443833 2960 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.2:6443: connect: connection refused Sep 4 17:10:43.445144 kubelet[2960]: E0904 17:10:43.445104 2960 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-2?timeout=10s\": dial tcp 172.31.29.2:6443: connect: connection refused" interval="1.6s" Sep 4 17:10:43.450313 containerd[2067]: time="2024-09-04T17:10:43.448686815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:10:43.450313 containerd[2067]: time="2024-09-04T17:10:43.448779239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:10:43.450313 containerd[2067]: time="2024-09-04T17:10:43.448820435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:10:43.450313 containerd[2067]: time="2024-09-04T17:10:43.448854647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:10:43.565699 kubelet[2960]: I0904 17:10:43.563219 2960 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-2" Sep 4 17:10:43.565699 kubelet[2960]: E0904 17:10:43.563742 2960 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.2:6443/api/v1/nodes\": dial tcp 172.31.29.2:6443: connect: connection refused" node="ip-172-31-29-2" Sep 4 17:10:43.702284 containerd[2067]: time="2024-09-04T17:10:43.701617968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-2,Uid:73f9f0da602050afff71be161564023b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9017c925220e22ee26b98184c8d91c11587c0b6bc4d8b312547f8449c72b5be9\"" Sep 4 17:10:43.742564 containerd[2067]: time="2024-09-04T17:10:43.741842676Z" level=info msg="CreateContainer within sandbox \"9017c925220e22ee26b98184c8d91c11587c0b6bc4d8b312547f8449c72b5be9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:10:43.782540 containerd[2067]: time="2024-09-04T17:10:43.782452260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-2,Uid:8ef9f090618530676d3dd267f8d19cf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"576a3606c152e43508f2c3bbeaef85fc9048b04192b523a72b838bd8e0ab0ae9\"" Sep 4 17:10:43.792655 containerd[2067]: time="2024-09-04T17:10:43.792303132Z" level=info msg="CreateContainer within sandbox \"576a3606c152e43508f2c3bbeaef85fc9048b04192b523a72b838bd8e0ab0ae9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:10:43.793305 containerd[2067]: time="2024-09-04T17:10:43.793088400Z" level=info msg="CreateContainer within sandbox \"9017c925220e22ee26b98184c8d91c11587c0b6bc4d8b312547f8449c72b5be9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b0f28e6388f6e66b05729fdb4cf919e2f0e152ea2a81310f5608a2a617ffbc2a\"" 
Sep 4 17:10:43.796817 containerd[2067]: time="2024-09-04T17:10:43.796767252Z" level=info msg="StartContainer for \"b0f28e6388f6e66b05729fdb4cf919e2f0e152ea2a81310f5608a2a617ffbc2a\"" Sep 4 17:10:43.805045 containerd[2067]: time="2024-09-04T17:10:43.804987577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-2,Uid:6a4f5bc25548c45bcd2834f9e6cc361a,Namespace:kube-system,Attempt:0,} returns sandbox id \"30974b339cd23c86f70b299879676055b90ca6f0a8f96abe74a9776cb1f0c969\"" Sep 4 17:10:43.812157 containerd[2067]: time="2024-09-04T17:10:43.811946761Z" level=info msg="CreateContainer within sandbox \"30974b339cd23c86f70b299879676055b90ca6f0a8f96abe74a9776cb1f0c969\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:10:43.815235 containerd[2067]: time="2024-09-04T17:10:43.815175193Z" level=info msg="CreateContainer within sandbox \"576a3606c152e43508f2c3bbeaef85fc9048b04192b523a72b838bd8e0ab0ae9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2a23235cc72fe9b3db8fa9d790c04cef6c46c53ddea6807999e291f38cff2588\"" Sep 4 17:10:43.816522 containerd[2067]: time="2024-09-04T17:10:43.816375781Z" level=info msg="StartContainer for \"2a23235cc72fe9b3db8fa9d790c04cef6c46c53ddea6807999e291f38cff2588\"" Sep 4 17:10:43.843813 containerd[2067]: time="2024-09-04T17:10:43.842722357Z" level=info msg="CreateContainer within sandbox \"30974b339cd23c86f70b299879676055b90ca6f0a8f96abe74a9776cb1f0c969\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"618bbf52731944b5f5aeca81b9b90c89dfe157916171cd9064e884ff2374bb3b\"" Sep 4 17:10:43.846152 containerd[2067]: time="2024-09-04T17:10:43.846092353Z" level=info msg="StartContainer for \"618bbf52731944b5f5aeca81b9b90c89dfe157916171cd9064e884ff2374bb3b\"" Sep 4 17:10:43.991789 containerd[2067]: time="2024-09-04T17:10:43.991356817Z" level=info msg="StartContainer for 
\"b0f28e6388f6e66b05729fdb4cf919e2f0e152ea2a81310f5608a2a617ffbc2a\" returns successfully" Sep 4 17:10:44.081664 containerd[2067]: time="2024-09-04T17:10:44.081485122Z" level=info msg="StartContainer for \"2a23235cc72fe9b3db8fa9d790c04cef6c46c53ddea6807999e291f38cff2588\" returns successfully" Sep 4 17:10:44.086524 kubelet[2960]: E0904 17:10:44.086486 2960 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.2:6443: connect: connection refused Sep 4 17:10:44.091310 containerd[2067]: time="2024-09-04T17:10:44.090802258Z" level=info msg="StartContainer for \"618bbf52731944b5f5aeca81b9b90c89dfe157916171cd9064e884ff2374bb3b\" returns successfully" Sep 4 17:10:45.169353 kubelet[2960]: I0904 17:10:45.166667 2960 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-2" Sep 4 17:10:47.932928 kubelet[2960]: E0904 17:10:47.932870 2960 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-2\" not found" node="ip-172-31-29-2" Sep 4 17:10:47.938329 kubelet[2960]: I0904 17:10:47.936044 2960 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-29-2" Sep 4 17:10:48.019718 kubelet[2960]: I0904 17:10:48.019636 2960 apiserver.go:52] "Watching apiserver" Sep 4 17:10:48.040975 kubelet[2960]: I0904 17:10:48.040901 2960 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:10:50.634167 systemd[1]: Reloading requested from client PID 3423 ('systemctl') (unit session-7.scope)... Sep 4 17:10:50.634901 systemd[1]: Reloading... Sep 4 17:10:50.815311 zram_generator::config[3461]: No configuration found. 
Sep 4 17:10:51.060769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:10:51.238025 systemd[1]: Reloading finished in 602 ms. Sep 4 17:10:51.299049 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:10:51.300036 kubelet[2960]: I0904 17:10:51.299096 2960 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:10:51.315929 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:10:51.316640 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:10:51.328020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:10:51.659615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:10:51.676100 (kubelet)[3531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:10:51.798316 kubelet[3531]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:10:51.798316 kubelet[3531]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:10:51.798316 kubelet[3531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:10:51.798316 kubelet[3531]: I0904 17:10:51.798005 3531 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:10:51.807650 kubelet[3531]: I0904 17:10:51.807295 3531 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:10:51.807650 kubelet[3531]: I0904 17:10:51.807352 3531 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:10:51.807860 kubelet[3531]: I0904 17:10:51.807744 3531 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:10:51.811277 kubelet[3531]: I0904 17:10:51.811192 3531 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:10:51.814318 kubelet[3531]: I0904 17:10:51.813632 3531 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:10:51.831290 kubelet[3531]: W0904 17:10:51.831226 3531 machine.go:65] Cannot read vendor id correctly, set empty. Sep 4 17:10:51.833116 kubelet[3531]: I0904 17:10:51.833069 3531 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:10:51.834307 kubelet[3531]: I0904 17:10:51.834063 3531 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:10:51.834814 kubelet[3531]: I0904 17:10:51.834437 3531 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:10:51.834814 kubelet[3531]: I0904 17:10:51.834524 3531 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:10:51.834814 kubelet[3531]: I0904 17:10:51.834547 3531 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:10:51.834814 kubelet[3531]: I0904 
17:10:51.834618 3531 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:10:51.834814 kubelet[3531]: I0904 17:10:51.834805 3531 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:10:51.837831 kubelet[3531]: I0904 17:10:51.834840 3531 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:10:51.837831 kubelet[3531]: I0904 17:10:51.834892 3531 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:10:51.837831 kubelet[3531]: I0904 17:10:51.834917 3531 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:10:51.851143 kubelet[3531]: I0904 17:10:51.851089 3531 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:10:51.852725 kubelet[3531]: I0904 17:10:51.851981 3531 server.go:1232] "Started kubelet" Sep 4 17:10:51.862766 kubelet[3531]: I0904 17:10:51.862710 3531 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:10:51.873587 kubelet[3531]: I0904 17:10:51.873542 3531 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:10:51.881763 kubelet[3531]: I0904 17:10:51.881717 3531 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:10:51.886470 kubelet[3531]: I0904 17:10:51.886432 3531 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:10:51.887327 kubelet[3531]: I0904 17:10:51.886950 3531 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:10:51.890615 kubelet[3531]: I0904 17:10:51.890577 3531 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:10:51.891533 kubelet[3531]: I0904 17:10:51.891471 3531 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:10:51.892216 kubelet[3531]: I0904 17:10:51.891944 3531 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:10:51.894067 kubelet[3531]: 
E0904 17:10:51.893520 3531 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:10:51.894396 kubelet[3531]: E0904 17:10:51.894319 3531 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:10:51.907890 kubelet[3531]: I0904 17:10:51.907693 3531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:10:51.910049 kubelet[3531]: I0904 17:10:51.909932 3531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:10:51.910743 kubelet[3531]: I0904 17:10:51.910199 3531 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:10:51.910743 kubelet[3531]: I0904 17:10:51.910244 3531 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:10:51.910743 kubelet[3531]: E0904 17:10:51.910367 3531 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:10:52.007729 kubelet[3531]: I0904 17:10:52.007695 3531 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-2" Sep 4 17:10:52.011216 kubelet[3531]: E0904 17:10:52.011019 3531 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:10:52.021655 kubelet[3531]: I0904 17:10:52.021326 3531 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-29-2" Sep 4 17:10:52.021655 kubelet[3531]: I0904 17:10:52.021445 3531 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-29-2" Sep 4 17:10:52.172103 kubelet[3531]: I0904 17:10:52.171958 3531 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:10:52.172103 kubelet[3531]: I0904 17:10:52.171999 3531 
cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:10:52.172103 kubelet[3531]: I0904 17:10:52.172034 3531 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:10:52.172352 kubelet[3531]: I0904 17:10:52.172339 3531 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:10:52.172460 kubelet[3531]: I0904 17:10:52.172383 3531 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:10:52.172460 kubelet[3531]: I0904 17:10:52.172401 3531 policy_none.go:49] "None policy: Start" Sep 4 17:10:52.175320 kubelet[3531]: I0904 17:10:52.174991 3531 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:10:52.175320 kubelet[3531]: I0904 17:10:52.175066 3531 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:10:52.175759 kubelet[3531]: I0904 17:10:52.175717 3531 state_mem.go:75] "Updated machine memory state" Sep 4 17:10:52.179388 kubelet[3531]: I0904 17:10:52.179354 3531 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:10:52.183474 kubelet[3531]: I0904 17:10:52.182549 3531 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:10:52.213463 kubelet[3531]: I0904 17:10:52.213234 3531 topology_manager.go:215] "Topology Admit Handler" podUID="73f9f0da602050afff71be161564023b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-2" Sep 4 17:10:52.213463 kubelet[3531]: I0904 17:10:52.213414 3531 topology_manager.go:215] "Topology Admit Handler" podUID="8ef9f090618530676d3dd267f8d19cf6" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-2" Sep 4 17:10:52.213635 kubelet[3531]: I0904 17:10:52.213489 3531 topology_manager.go:215] "Topology Admit Handler" podUID="6a4f5bc25548c45bcd2834f9e6cc361a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-2" Sep 4 17:10:52.232248 kubelet[3531]: E0904 17:10:52.232201 3531 kubelet.go:1890] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ip-172-31-29-2\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-2" Sep 4 17:10:52.292998 kubelet[3531]: I0904 17:10:52.292806 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a4f5bc25548c45bcd2834f9e6cc361a-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-2\" (UID: \"6a4f5bc25548c45bcd2834f9e6cc361a\") " pod="kube-system/kube-scheduler-ip-172-31-29-2" Sep 4 17:10:52.292998 kubelet[3531]: I0904 17:10:52.292868 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f9f0da602050afff71be161564023b-ca-certs\") pod \"kube-apiserver-ip-172-31-29-2\" (UID: \"73f9f0da602050afff71be161564023b\") " pod="kube-system/kube-apiserver-ip-172-31-29-2" Sep 4 17:10:52.292998 kubelet[3531]: I0904 17:10:52.292913 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ef9f090618530676d3dd267f8d19cf6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-2\" (UID: \"8ef9f090618530676d3dd267f8d19cf6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-2" Sep 4 17:10:52.292998 kubelet[3531]: I0904 17:10:52.292965 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ef9f090618530676d3dd267f8d19cf6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-2\" (UID: \"8ef9f090618530676d3dd267f8d19cf6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-2" Sep 4 17:10:52.293599 kubelet[3531]: I0904 17:10:52.293033 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f9f0da602050afff71be161564023b-k8s-certs\") pod 
\"kube-apiserver-ip-172-31-29-2\" (UID: \"73f9f0da602050afff71be161564023b\") " pod="kube-system/kube-apiserver-ip-172-31-29-2" Sep 4 17:10:52.293599 kubelet[3531]: I0904 17:10:52.293090 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f9f0da602050afff71be161564023b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-2\" (UID: \"73f9f0da602050afff71be161564023b\") " pod="kube-system/kube-apiserver-ip-172-31-29-2" Sep 4 17:10:52.293599 kubelet[3531]: I0904 17:10:52.293145 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ef9f090618530676d3dd267f8d19cf6-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-2\" (UID: \"8ef9f090618530676d3dd267f8d19cf6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-2" Sep 4 17:10:52.293599 kubelet[3531]: I0904 17:10:52.293225 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8ef9f090618530676d3dd267f8d19cf6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-2\" (UID: \"8ef9f090618530676d3dd267f8d19cf6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-2" Sep 4 17:10:52.293599 kubelet[3531]: I0904 17:10:52.293295 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ef9f090618530676d3dd267f8d19cf6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-2\" (UID: \"8ef9f090618530676d3dd267f8d19cf6\") " pod="kube-system/kube-controller-manager-ip-172-31-29-2" Sep 4 17:10:52.836792 kubelet[3531]: I0904 17:10:52.836730 3531 apiserver.go:52] "Watching apiserver" Sep 4 17:10:52.891940 kubelet[3531]: I0904 17:10:52.891871 3531 desired_state_of_world_populator.go:159] "Finished 
populating initial desired state of world" Sep 4 17:10:53.068178 kubelet[3531]: I0904 17:10:53.066838 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-2" podStartSLOduration=1.066739303 podCreationTimestamp="2024-09-04 17:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:10:53.052303098 +0000 UTC m=+1.364494735" watchObservedRunningTime="2024-09-04 17:10:53.066739303 +0000 UTC m=+1.378930952" Sep 4 17:10:53.089859 kubelet[3531]: I0904 17:10:53.089411 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-2" podStartSLOduration=3.089356183 podCreationTimestamp="2024-09-04 17:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:10:53.068791651 +0000 UTC m=+1.380983276" watchObservedRunningTime="2024-09-04 17:10:53.089356183 +0000 UTC m=+1.401547808" Sep 4 17:10:53.115183 kubelet[3531]: I0904 17:10:53.114953 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-2" podStartSLOduration=1.114898171 podCreationTimestamp="2024-09-04 17:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:10:53.091251667 +0000 UTC m=+1.403443316" watchObservedRunningTime="2024-09-04 17:10:53.114898171 +0000 UTC m=+1.427089820" Sep 4 17:10:58.827391 sudo[2420]: pam_unix(sudo:session): session closed for user root Sep 4 17:10:58.851928 sshd[2416]: pam_unix(sshd:session): session closed for user core Sep 4 17:10:58.860537 systemd[1]: sshd@6-172.31.29.2:22-139.178.89.65:43516.service: Deactivated successfully. Sep 4 17:10:58.866987 systemd-logind[2036]: Session 7 logged out. 
Waiting for processes to exit. Sep 4 17:10:58.868886 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:10:58.871383 systemd-logind[2036]: Removed session 7. Sep 4 17:11:04.381139 kubelet[3531]: I0904 17:11:04.381052 3531 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:11:04.384303 containerd[2067]: time="2024-09-04T17:11:04.381905683Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:11:04.384999 kubelet[3531]: I0904 17:11:04.382368 3531 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:11:05.182731 kubelet[3531]: I0904 17:11:05.181720 3531 topology_manager.go:215] "Topology Admit Handler" podUID="164c9cd3-25b1-4886-be04-3cc068ab8eb4" podNamespace="kube-system" podName="kube-proxy-8nzf2" Sep 4 17:11:05.282369 kubelet[3531]: I0904 17:11:05.280327 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/164c9cd3-25b1-4886-be04-3cc068ab8eb4-lib-modules\") pod \"kube-proxy-8nzf2\" (UID: \"164c9cd3-25b1-4886-be04-3cc068ab8eb4\") " pod="kube-system/kube-proxy-8nzf2" Sep 4 17:11:05.282811 kubelet[3531]: I0904 17:11:05.282732 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/164c9cd3-25b1-4886-be04-3cc068ab8eb4-kube-proxy\") pod \"kube-proxy-8nzf2\" (UID: \"164c9cd3-25b1-4886-be04-3cc068ab8eb4\") " pod="kube-system/kube-proxy-8nzf2" Sep 4 17:11:05.283009 kubelet[3531]: I0904 17:11:05.282891 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fplq9\" (UniqueName: \"kubernetes.io/projected/164c9cd3-25b1-4886-be04-3cc068ab8eb4-kube-api-access-fplq9\") pod \"kube-proxy-8nzf2\" (UID: \"164c9cd3-25b1-4886-be04-3cc068ab8eb4\") 
" pod="kube-system/kube-proxy-8nzf2" Sep 4 17:11:05.283393 kubelet[3531]: I0904 17:11:05.283320 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/164c9cd3-25b1-4886-be04-3cc068ab8eb4-xtables-lock\") pod \"kube-proxy-8nzf2\" (UID: \"164c9cd3-25b1-4886-be04-3cc068ab8eb4\") " pod="kube-system/kube-proxy-8nzf2" Sep 4 17:11:05.320921 kubelet[3531]: I0904 17:11:05.320792 3531 topology_manager.go:215] "Topology Admit Handler" podUID="0c122217-ae82-4d58-ade0-b29f12f0fce0" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-rrndp" Sep 4 17:11:05.384358 kubelet[3531]: I0904 17:11:05.384279 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbsv\" (UniqueName: \"kubernetes.io/projected/0c122217-ae82-4d58-ade0-b29f12f0fce0-kube-api-access-ntbsv\") pod \"tigera-operator-5d56685c77-rrndp\" (UID: \"0c122217-ae82-4d58-ade0-b29f12f0fce0\") " pod="tigera-operator/tigera-operator-5d56685c77-rrndp" Sep 4 17:11:05.386059 kubelet[3531]: I0904 17:11:05.384442 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c122217-ae82-4d58-ade0-b29f12f0fce0-var-lib-calico\") pod \"tigera-operator-5d56685c77-rrndp\" (UID: \"0c122217-ae82-4d58-ade0-b29f12f0fce0\") " pod="tigera-operator/tigera-operator-5d56685c77-rrndp" Sep 4 17:11:05.505000 containerd[2067]: time="2024-09-04T17:11:05.502970996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8nzf2,Uid:164c9cd3-25b1-4886-be04-3cc068ab8eb4,Namespace:kube-system,Attempt:0,}" Sep 4 17:11:05.544188 containerd[2067]: time="2024-09-04T17:11:05.543862857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:05.544188 containerd[2067]: time="2024-09-04T17:11:05.543957513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:05.544188 containerd[2067]: time="2024-09-04T17:11:05.543988365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:05.544188 containerd[2067]: time="2024-09-04T17:11:05.544012005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:05.617588 containerd[2067]: time="2024-09-04T17:11:05.617457405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8nzf2,Uid:164c9cd3-25b1-4886-be04-3cc068ab8eb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e43d409888a1e523960fbb95440ea83f18b55981ad46cb2a2b95a017ff95b9f3\"" Sep 4 17:11:05.625068 containerd[2067]: time="2024-09-04T17:11:05.625007229Z" level=info msg="CreateContainer within sandbox \"e43d409888a1e523960fbb95440ea83f18b55981ad46cb2a2b95a017ff95b9f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:11:05.637668 containerd[2067]: time="2024-09-04T17:11:05.637497237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-rrndp,Uid:0c122217-ae82-4d58-ade0-b29f12f0fce0,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:11:05.645079 containerd[2067]: time="2024-09-04T17:11:05.644919969Z" level=info msg="CreateContainer within sandbox \"e43d409888a1e523960fbb95440ea83f18b55981ad46cb2a2b95a017ff95b9f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f970de8388e9c2a9aa0ddc79955a367bcbf3773d0aac98ee6cdc1f3177a1289\"" Sep 4 17:11:05.647719 containerd[2067]: time="2024-09-04T17:11:05.646075149Z" level=info msg="StartContainer for 
\"4f970de8388e9c2a9aa0ddc79955a367bcbf3773d0aac98ee6cdc1f3177a1289\"" Sep 4 17:11:05.686361 containerd[2067]: time="2024-09-04T17:11:05.685977381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:05.686361 containerd[2067]: time="2024-09-04T17:11:05.686074557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:05.686361 containerd[2067]: time="2024-09-04T17:11:05.686105349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:05.686629 containerd[2067]: time="2024-09-04T17:11:05.686128569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:05.782828 containerd[2067]: time="2024-09-04T17:11:05.782684662Z" level=info msg="StartContainer for \"4f970de8388e9c2a9aa0ddc79955a367bcbf3773d0aac98ee6cdc1f3177a1289\" returns successfully" Sep 4 17:11:05.825801 containerd[2067]: time="2024-09-04T17:11:05.825631738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-rrndp,Uid:0c122217-ae82-4d58-ade0-b29f12f0fce0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9b9089b7b681fb79abbf90f4dbc49fd00ae6f6e6b2f0daab55fd14e6ecf7d53b\"" Sep 4 17:11:05.834498 containerd[2067]: time="2024-09-04T17:11:05.834407686Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:11:06.408787 systemd[1]: run-containerd-runc-k8s.io-e43d409888a1e523960fbb95440ea83f18b55981ad46cb2a2b95a017ff95b9f3-runc.Y81i6K.mount: Deactivated successfully. Sep 4 17:11:07.771526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416022512.mount: Deactivated successfully. 
Sep 4 17:11:09.050721 containerd[2067]: time="2024-09-04T17:11:09.050649202Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:09.052076 containerd[2067]: time="2024-09-04T17:11:09.051962350Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485927" Sep 4 17:11:09.053209 containerd[2067]: time="2024-09-04T17:11:09.053108566Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:09.057811 containerd[2067]: time="2024-09-04T17:11:09.057702010Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:09.059807 containerd[2067]: time="2024-09-04T17:11:09.059713810Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 3.225226336s" Sep 4 17:11:09.060053 containerd[2067]: time="2024-09-04T17:11:09.059774866Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Sep 4 17:11:09.065469 containerd[2067]: time="2024-09-04T17:11:09.064789186Z" level=info msg="CreateContainer within sandbox \"9b9089b7b681fb79abbf90f4dbc49fd00ae6f6e6b2f0daab55fd14e6ecf7d53b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:11:09.084954 containerd[2067]: time="2024-09-04T17:11:09.084070006Z" level=info msg="CreateContainer within sandbox 
\"9b9089b7b681fb79abbf90f4dbc49fd00ae6f6e6b2f0daab55fd14e6ecf7d53b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6836f130ca7819b716ad5fccd3b6f3c749fe9784b63aaf3282723a91a1e8a082\"" Sep 4 17:11:09.086923 containerd[2067]: time="2024-09-04T17:11:09.086138854Z" level=info msg="StartContainer for \"6836f130ca7819b716ad5fccd3b6f3c749fe9784b63aaf3282723a91a1e8a082\"" Sep 4 17:11:09.182921 containerd[2067]: time="2024-09-04T17:11:09.182822363Z" level=info msg="StartContainer for \"6836f130ca7819b716ad5fccd3b6f3c749fe9784b63aaf3282723a91a1e8a082\" returns successfully" Sep 4 17:11:10.087674 kubelet[3531]: I0904 17:11:10.086528 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8nzf2" podStartSLOduration=5.086468699 podCreationTimestamp="2024-09-04 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:06.077933935 +0000 UTC m=+14.390125572" watchObservedRunningTime="2024-09-04 17:11:10.086468699 +0000 UTC m=+18.398660336" Sep 4 17:11:11.937743 kubelet[3531]: I0904 17:11:11.937682 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-rrndp" podStartSLOduration=3.70691622 podCreationTimestamp="2024-09-04 17:11:05 +0000 UTC" firstStartedPulling="2024-09-04 17:11:05.82980229 +0000 UTC m=+14.141993915" lastFinishedPulling="2024-09-04 17:11:09.060506698 +0000 UTC m=+17.372698311" observedRunningTime="2024-09-04 17:11:10.087623255 +0000 UTC m=+18.399814892" watchObservedRunningTime="2024-09-04 17:11:11.937620616 +0000 UTC m=+20.249812277" Sep 4 17:11:14.228291 kubelet[3531]: I0904 17:11:14.227395 3531 topology_manager.go:215] "Topology Admit Handler" podUID="ccfd842c-40f6-48e0-a99e-3fae60ee8c8a" podNamespace="calico-system" podName="calico-typha-6d6c9d8c58-75scz" Sep 4 17:11:14.344815 kubelet[3531]: I0904 17:11:14.344592 3531 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp8tr\" (UniqueName: \"kubernetes.io/projected/ccfd842c-40f6-48e0-a99e-3fae60ee8c8a-kube-api-access-zp8tr\") pod \"calico-typha-6d6c9d8c58-75scz\" (UID: \"ccfd842c-40f6-48e0-a99e-3fae60ee8c8a\") " pod="calico-system/calico-typha-6d6c9d8c58-75scz" Sep 4 17:11:14.344815 kubelet[3531]: I0904 17:11:14.344663 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccfd842c-40f6-48e0-a99e-3fae60ee8c8a-tigera-ca-bundle\") pod \"calico-typha-6d6c9d8c58-75scz\" (UID: \"ccfd842c-40f6-48e0-a99e-3fae60ee8c8a\") " pod="calico-system/calico-typha-6d6c9d8c58-75scz" Sep 4 17:11:14.344815 kubelet[3531]: I0904 17:11:14.344709 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ccfd842c-40f6-48e0-a99e-3fae60ee8c8a-typha-certs\") pod \"calico-typha-6d6c9d8c58-75scz\" (UID: \"ccfd842c-40f6-48e0-a99e-3fae60ee8c8a\") " pod="calico-system/calico-typha-6d6c9d8c58-75scz" Sep 4 17:11:14.392090 kubelet[3531]: I0904 17:11:14.391471 3531 topology_manager.go:215] "Topology Admit Handler" podUID="47e928b6-3dbf-4e7b-90fd-947de62bc1d0" podNamespace="calico-system" podName="calico-node-4n2c2" Sep 4 17:11:14.446607 kubelet[3531]: I0904 17:11:14.445127 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqssk\" (UniqueName: \"kubernetes.io/projected/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-kube-api-access-vqssk\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.446607 kubelet[3531]: I0904 17:11:14.445230 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-cni-bin-dir\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.446607 kubelet[3531]: I0904 17:11:14.445299 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-var-lib-calico\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.446607 kubelet[3531]: I0904 17:11:14.445346 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-flexvol-driver-host\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.446607 kubelet[3531]: I0904 17:11:14.445396 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-node-certs\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.447010 kubelet[3531]: I0904 17:11:14.445445 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-var-run-calico\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.447010 kubelet[3531]: I0904 17:11:14.445513 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-policysync\") 
pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.447010 kubelet[3531]: I0904 17:11:14.445561 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-lib-modules\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.447010 kubelet[3531]: I0904 17:11:14.445605 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-xtables-lock\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.447010 kubelet[3531]: I0904 17:11:14.445651 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-tigera-ca-bundle\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.448434 kubelet[3531]: I0904 17:11:14.445699 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-cni-net-dir\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.448434 kubelet[3531]: I0904 17:11:14.445743 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/47e928b6-3dbf-4e7b-90fd-947de62bc1d0-cni-log-dir\") pod \"calico-node-4n2c2\" (UID: \"47e928b6-3dbf-4e7b-90fd-947de62bc1d0\") " 
pod="calico-system/calico-node-4n2c2" Sep 4 17:11:14.562699 containerd[2067]: time="2024-09-04T17:11:14.558339881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d6c9d8c58-75scz,Uid:ccfd842c-40f6-48e0-a99e-3fae60ee8c8a,Namespace:calico-system,Attempt:0,}" Sep 4 17:11:14.592307 kubelet[3531]: I0904 17:11:14.586897 3531 topology_manager.go:215] "Topology Admit Handler" podUID="9f07346e-efe9-4d91-a84b-47e3d545d647" podNamespace="calico-system" podName="csi-node-driver-lxpqm" Sep 4 17:11:14.592307 kubelet[3531]: E0904 17:11:14.587311 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxpqm" podUID="9f07346e-efe9-4d91-a84b-47e3d545d647" Sep 4 17:11:14.592743 kubelet[3531]: E0904 17:11:14.592714 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.592959 kubelet[3531]: W0904 17:11:14.592876 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.602897 kubelet[3531]: E0904 17:11:14.593895 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.649513 kubelet[3531]: E0904 17:11:14.649474 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.649742 kubelet[3531]: W0904 17:11:14.649693 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.649940 kubelet[3531]: E0904 17:11:14.649917 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.655407 kubelet[3531]: E0904 17:11:14.655372 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.660786 kubelet[3531]: W0904 17:11:14.660343 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.660786 kubelet[3531]: E0904 17:11:14.660405 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.663296 kubelet[3531]: E0904 17:11:14.662601 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.664313 kubelet[3531]: W0904 17:11:14.663486 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.664313 kubelet[3531]: E0904 17:11:14.663563 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.666952 kubelet[3531]: E0904 17:11:14.666916 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.668193 kubelet[3531]: W0904 17:11:14.667649 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.676474 kubelet[3531]: E0904 17:11:14.674383 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.679045 kubelet[3531]: E0904 17:11:14.678418 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.679045 kubelet[3531]: W0904 17:11:14.678454 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.679045 kubelet[3531]: E0904 17:11:14.678502 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.681475 kubelet[3531]: E0904 17:11:14.681435 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.681924 kubelet[3531]: W0904 17:11:14.681683 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.681924 kubelet[3531]: E0904 17:11:14.681731 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.686208 kubelet[3531]: E0904 17:11:14.685803 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.686208 kubelet[3531]: W0904 17:11:14.685835 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.686208 kubelet[3531]: E0904 17:11:14.685870 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.687899 kubelet[3531]: E0904 17:11:14.687555 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.687899 kubelet[3531]: W0904 17:11:14.687592 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.687899 kubelet[3531]: E0904 17:11:14.687629 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.692805 kubelet[3531]: E0904 17:11:14.692478 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.692805 kubelet[3531]: W0904 17:11:14.692515 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.692805 kubelet[3531]: E0904 17:11:14.692553 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.693389 kubelet[3531]: E0904 17:11:14.693067 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.693389 kubelet[3531]: W0904 17:11:14.693089 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.693389 kubelet[3531]: E0904 17:11:14.693121 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.696638 kubelet[3531]: E0904 17:11:14.694911 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.696638 kubelet[3531]: W0904 17:11:14.696468 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.696638 kubelet[3531]: E0904 17:11:14.696511 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.700056 kubelet[3531]: E0904 17:11:14.699741 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.700056 kubelet[3531]: W0904 17:11:14.699777 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.700056 kubelet[3531]: E0904 17:11:14.699814 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.704600 kubelet[3531]: E0904 17:11:14.703464 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.704600 kubelet[3531]: W0904 17:11:14.703502 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.704600 kubelet[3531]: E0904 17:11:14.703538 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.706602 kubelet[3531]: E0904 17:11:14.705446 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.706602 kubelet[3531]: W0904 17:11:14.705479 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.706602 kubelet[3531]: E0904 17:11:14.705517 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.708967 kubelet[3531]: E0904 17:11:14.707342 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.708967 kubelet[3531]: W0904 17:11:14.707377 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.708967 kubelet[3531]: E0904 17:11:14.707417 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.711657 kubelet[3531]: E0904 17:11:14.711471 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.711657 kubelet[3531]: W0904 17:11:14.711506 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.711657 kubelet[3531]: E0904 17:11:14.711549 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.718321 kubelet[3531]: E0904 17:11:14.714628 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.718321 kubelet[3531]: W0904 17:11:14.714663 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.718321 kubelet[3531]: E0904 17:11:14.714702 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.718810 kubelet[3531]: E0904 17:11:14.718777 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.719081 kubelet[3531]: W0904 17:11:14.718946 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.719081 kubelet[3531]: E0904 17:11:14.718992 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.719992 kubelet[3531]: E0904 17:11:14.719961 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.720199 kubelet[3531]: W0904 17:11:14.720173 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.720476 kubelet[3531]: E0904 17:11:14.720344 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.721622 kubelet[3531]: E0904 17:11:14.721587 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.722584 kubelet[3531]: W0904 17:11:14.722413 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.722584 kubelet[3531]: E0904 17:11:14.722471 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.724932 kubelet[3531]: E0904 17:11:14.724609 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.724932 kubelet[3531]: W0904 17:11:14.724643 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.724932 kubelet[3531]: E0904 17:11:14.724678 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.727873 kubelet[3531]: E0904 17:11:14.727459 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.727873 kubelet[3531]: W0904 17:11:14.727492 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.727873 kubelet[3531]: E0904 17:11:14.727528 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.727873 kubelet[3531]: I0904 17:11:14.727583 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9f07346e-efe9-4d91-a84b-47e3d545d647-socket-dir\") pod \"csi-node-driver-lxpqm\" (UID: \"9f07346e-efe9-4d91-a84b-47e3d545d647\") " pod="calico-system/csi-node-driver-lxpqm" Sep 4 17:11:14.735313 containerd[2067]: time="2024-09-04T17:11:14.729624186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4n2c2,Uid:47e928b6-3dbf-4e7b-90fd-947de62bc1d0,Namespace:calico-system,Attempt:0,}" Sep 4 17:11:14.735481 kubelet[3531]: E0904 17:11:14.733069 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.735481 kubelet[3531]: W0904 17:11:14.733102 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.735481 kubelet[3531]: E0904 17:11:14.733141 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.735481 kubelet[3531]: I0904 17:11:14.733196 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9f07346e-efe9-4d91-a84b-47e3d545d647-varrun\") pod \"csi-node-driver-lxpqm\" (UID: \"9f07346e-efe9-4d91-a84b-47e3d545d647\") " pod="calico-system/csi-node-driver-lxpqm" Sep 4 17:11:14.738896 kubelet[3531]: E0904 17:11:14.738843 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.740332 kubelet[3531]: W0904 17:11:14.739154 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.744306 kubelet[3531]: E0904 17:11:14.741234 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.746996 kubelet[3531]: E0904 17:11:14.746431 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.746996 kubelet[3531]: W0904 17:11:14.746575 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.746996 kubelet[3531]: E0904 17:11:14.746611 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.746996 kubelet[3531]: I0904 17:11:14.746528 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc96s\" (UniqueName: \"kubernetes.io/projected/9f07346e-efe9-4d91-a84b-47e3d545d647-kube-api-access-jc96s\") pod \"csi-node-driver-lxpqm\" (UID: \"9f07346e-efe9-4d91-a84b-47e3d545d647\") " pod="calico-system/csi-node-driver-lxpqm" Sep 4 17:11:14.748317 kubelet[3531]: E0904 17:11:14.747479 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.748317 kubelet[3531]: W0904 17:11:14.747504 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.748728 kubelet[3531]: E0904 17:11:14.748588 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.750754 kubelet[3531]: E0904 17:11:14.750716 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.750988 kubelet[3531]: W0904 17:11:14.750958 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.752380 kubelet[3531]: E0904 17:11:14.751157 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.753718 kubelet[3531]: E0904 17:11:14.753493 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.754647 kubelet[3531]: W0904 17:11:14.754113 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.754647 kubelet[3531]: E0904 17:11:14.754402 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.757509 kubelet[3531]: E0904 17:11:14.756930 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.757509 kubelet[3531]: W0904 17:11:14.756965 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.757509 kubelet[3531]: E0904 17:11:14.757217 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.759199 kubelet[3531]: E0904 17:11:14.758962 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.759199 kubelet[3531]: W0904 17:11:14.759021 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.759199 kubelet[3531]: E0904 17:11:14.759076 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.759199 kubelet[3531]: I0904 17:11:14.759148 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f07346e-efe9-4d91-a84b-47e3d545d647-kubelet-dir\") pod \"csi-node-driver-lxpqm\" (UID: \"9f07346e-efe9-4d91-a84b-47e3d545d647\") " pod="calico-system/csi-node-driver-lxpqm" Sep 4 17:11:14.763775 kubelet[3531]: E0904 17:11:14.763726 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.763953 kubelet[3531]: W0904 17:11:14.763864 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.763953 kubelet[3531]: E0904 17:11:14.763918 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.764070 containerd[2067]: time="2024-09-04T17:11:14.763196430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:14.764070 containerd[2067]: time="2024-09-04T17:11:14.763431186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:14.764070 containerd[2067]: time="2024-09-04T17:11:14.763502478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:14.764070 containerd[2067]: time="2024-09-04T17:11:14.763539330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:14.771010 kubelet[3531]: E0904 17:11:14.769480 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.771010 kubelet[3531]: W0904 17:11:14.769520 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.771010 kubelet[3531]: E0904 17:11:14.769713 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.773834 kubelet[3531]: E0904 17:11:14.772535 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.773834 kubelet[3531]: W0904 17:11:14.772684 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.773834 kubelet[3531]: E0904 17:11:14.773398 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.773834 kubelet[3531]: W0904 17:11:14.773417 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.773834 kubelet[3531]: E0904 17:11:14.773481 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.773834 kubelet[3531]: E0904 17:11:14.773481 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.773834 kubelet[3531]: I0904 17:11:14.773616 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9f07346e-efe9-4d91-a84b-47e3d545d647-registration-dir\") pod \"csi-node-driver-lxpqm\" (UID: \"9f07346e-efe9-4d91-a84b-47e3d545d647\") " pod="calico-system/csi-node-driver-lxpqm" Sep 4 17:11:14.778295 kubelet[3531]: E0904 17:11:14.774462 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.778295 kubelet[3531]: W0904 17:11:14.774532 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.778295 kubelet[3531]: E0904 17:11:14.774572 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.778295 kubelet[3531]: E0904 17:11:14.775470 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.778295 kubelet[3531]: W0904 17:11:14.775497 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.778295 kubelet[3531]: E0904 17:11:14.775545 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.877972 kubelet[3531]: E0904 17:11:14.876790 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.877972 kubelet[3531]: W0904 17:11:14.876827 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.877972 kubelet[3531]: E0904 17:11:14.876866 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.881308 kubelet[3531]: E0904 17:11:14.880591 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.881308 kubelet[3531]: W0904 17:11:14.880753 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.881308 kubelet[3531]: E0904 17:11:14.881002 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.884944 kubelet[3531]: E0904 17:11:14.884688 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.884944 kubelet[3531]: W0904 17:11:14.884934 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.885140 kubelet[3531]: E0904 17:11:14.885106 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.898014 kubelet[3531]: E0904 17:11:14.895943 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.898014 kubelet[3531]: W0904 17:11:14.895978 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.898014 kubelet[3531]: E0904 17:11:14.896018 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.905252 kubelet[3531]: E0904 17:11:14.905193 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.905252 kubelet[3531]: W0904 17:11:14.905238 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.911309 kubelet[3531]: E0904 17:11:14.910061 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.911309 kubelet[3531]: E0904 17:11:14.910326 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.911309 kubelet[3531]: W0904 17:11:14.910625 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.916381 kubelet[3531]: E0904 17:11:14.911944 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.916381 kubelet[3531]: W0904 17:11:14.911973 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.916381 kubelet[3531]: E0904 17:11:14.913476 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.916381 kubelet[3531]: W0904 17:11:14.913504 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Sep 4 17:11:14.918042 containerd[2067]: time="2024-09-04T17:11:14.917381227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:14.918042 containerd[2067]: time="2024-09-04T17:11:14.917474203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:14.918042 containerd[2067]: time="2024-09-04T17:11:14.917505787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:14.918042 containerd[2067]: time="2024-09-04T17:11:14.917530111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:14.918340 kubelet[3531]: E0904 17:11:14.918192 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.919497 kubelet[3531]: W0904 17:11:14.918218 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.919497 kubelet[3531]: E0904 17:11:14.919388 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.920205 kubelet[3531]: E0904 17:11:14.919945 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.920205 kubelet[3531]: W0904 17:11:14.919983 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.920205 kubelet[3531]: E0904 17:11:14.920015 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.920205 kubelet[3531]: E0904 17:11:14.920067 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.921573 kubelet[3531]: E0904 17:11:14.921071 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.921573 kubelet[3531]: W0904 17:11:14.921110 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.921573 kubelet[3531]: E0904 17:11:14.921147 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.923285 kubelet[3531]: E0904 17:11:14.923135 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.923285 kubelet[3531]: W0904 17:11:14.923175 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.923285 kubelet[3531]: E0904 17:11:14.923228 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.925666 kubelet[3531]: E0904 17:11:14.925156 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.928556 kubelet[3531]: E0904 17:11:14.928350 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.928556 kubelet[3531]: W0904 17:11:14.928397 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.928556 kubelet[3531]: E0904 17:11:14.928441 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.933315 kubelet[3531]: E0904 17:11:14.933004 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.938222 kubelet[3531]: W0904 17:11:14.937935 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.939704 kubelet[3531]: E0904 17:11:14.933851 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.948712 kubelet[3531]: E0904 17:11:14.940964 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.948712 kubelet[3531]: W0904 17:11:14.946448 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.948712 kubelet[3531]: E0904 17:11:14.946494 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.949104 kubelet[3531]: E0904 17:11:14.949070 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.949519 kubelet[3531]: W0904 17:11:14.949393 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.950831 kubelet[3531]: E0904 17:11:14.949902 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.952522 kubelet[3531]: E0904 17:11:14.952338 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.953462 kubelet[3531]: W0904 17:11:14.953152 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.955128 kubelet[3531]: E0904 17:11:14.954355 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.957619 kubelet[3531]: E0904 17:11:14.956971 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.958189 kubelet[3531]: E0904 17:11:14.940999 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.959742 kubelet[3531]: W0904 17:11:14.959509 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.962134 kubelet[3531]: E0904 17:11:14.961185 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.964170 kubelet[3531]: W0904 17:11:14.962528 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.965834 kubelet[3531]: E0904 17:11:14.965217 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.969461 kubelet[3531]: W0904 17:11:14.966407 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.969461 kubelet[3531]: E0904 17:11:14.966467 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.975249 kubelet[3531]: E0904 17:11:14.972366 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.975249 kubelet[3531]: W0904 17:11:14.974330 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.975249 kubelet[3531]: E0904 17:11:14.974386 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.978764 kubelet[3531]: E0904 17:11:14.965501 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.978920 kubelet[3531]: E0904 17:11:14.965518 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.978920 kubelet[3531]: E0904 17:11:14.978904 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.979056 kubelet[3531]: W0904 17:11:14.978922 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.979056 kubelet[3531]: E0904 17:11:14.978954 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.982032 kubelet[3531]: E0904 17:11:14.981720 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.982032 kubelet[3531]: W0904 17:11:14.981754 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.982032 kubelet[3531]: E0904 17:11:14.981796 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:14.985880 kubelet[3531]: E0904 17:11:14.985829 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.985880 kubelet[3531]: W0904 17:11:14.985861 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.986083 kubelet[3531]: E0904 17:11:14.985898 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:14.989099 kubelet[3531]: E0904 17:11:14.989060 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:14.989099 kubelet[3531]: W0904 17:11:14.989092 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:14.989316 kubelet[3531]: E0904 17:11:14.989129 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:15.017704 kubelet[3531]: E0904 17:11:15.017472 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:15.017704 kubelet[3531]: W0904 17:11:15.017530 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:15.017704 kubelet[3531]: E0904 17:11:15.017569 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:15.076074 containerd[2067]: time="2024-09-04T17:11:15.076011028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d6c9d8c58-75scz,Uid:ccfd842c-40f6-48e0-a99e-3fae60ee8c8a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d33950119567f079b39245894355c1671ab2a979ec4e4e0281e07868402a5e6\"" Sep 4 17:11:15.084884 containerd[2067]: time="2024-09-04T17:11:15.084533296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:11:15.108158 containerd[2067]: time="2024-09-04T17:11:15.108016276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4n2c2,Uid:47e928b6-3dbf-4e7b-90fd-947de62bc1d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"f68fe1eac4626f2990f8e198172a96ef0a919003ee75f32a2c33d723857cc8a3\"" Sep 4 17:11:15.913347 kubelet[3531]: E0904 17:11:15.911367 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxpqm" podUID="9f07346e-efe9-4d91-a84b-47e3d545d647" Sep 4 17:11:17.838679 containerd[2067]: time="2024-09-04T17:11:17.838599346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:17.840219 containerd[2067]: time="2024-09-04T17:11:17.840119962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:11:17.843093 containerd[2067]: time="2024-09-04T17:11:17.842774722Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:17.848246 containerd[2067]: time="2024-09-04T17:11:17.848178658Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:17.851096 containerd[2067]: time="2024-09-04T17:11:17.850940506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.76634067s" Sep 4 17:11:17.851096 containerd[2067]: time="2024-09-04T17:11:17.851029894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:11:17.857760 containerd[2067]: time="2024-09-04T17:11:17.857693218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:11:17.899034 containerd[2067]: time="2024-09-04T17:11:17.896214934Z" level=info msg="CreateContainer within sandbox \"6d33950119567f079b39245894355c1671ab2a979ec4e4e0281e07868402a5e6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:11:17.914530 kubelet[3531]: E0904 17:11:17.912004 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxpqm" podUID="9f07346e-efe9-4d91-a84b-47e3d545d647" Sep 4 17:11:17.939378 containerd[2067]: time="2024-09-04T17:11:17.939302998Z" level=info msg="CreateContainer within sandbox \"6d33950119567f079b39245894355c1671ab2a979ec4e4e0281e07868402a5e6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"127f7504ab497b3431b1dea2dc62b544a8c9e585f4b1b323ef18d0cf6f931639\"" Sep 4 
17:11:17.942376 containerd[2067]: time="2024-09-04T17:11:17.941303458Z" level=info msg="StartContainer for \"127f7504ab497b3431b1dea2dc62b544a8c9e585f4b1b323ef18d0cf6f931639\"" Sep 4 17:11:18.136321 containerd[2067]: time="2024-09-04T17:11:18.134804011Z" level=info msg="StartContainer for \"127f7504ab497b3431b1dea2dc62b544a8c9e585f4b1b323ef18d0cf6f931639\" returns successfully" Sep 4 17:11:18.872540 systemd[1]: run-containerd-runc-k8s.io-127f7504ab497b3431b1dea2dc62b544a8c9e585f4b1b323ef18d0cf6f931639-runc.5mcJCA.mount: Deactivated successfully. Sep 4 17:11:19.145515 kubelet[3531]: I0904 17:11:19.144201 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6d6c9d8c58-75scz" podStartSLOduration=2.372726354 podCreationTimestamp="2024-09-04 17:11:14 +0000 UTC" firstStartedPulling="2024-09-04 17:11:15.081167812 +0000 UTC m=+23.393359437" lastFinishedPulling="2024-09-04 17:11:17.852581866 +0000 UTC m=+26.164773491" observedRunningTime="2024-09-04 17:11:19.143817152 +0000 UTC m=+27.456008789" watchObservedRunningTime="2024-09-04 17:11:19.144140408 +0000 UTC m=+27.456332057" Sep 4 17:11:19.170312 kubelet[3531]: E0904 17:11:19.169519 3531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.170312 kubelet[3531]: W0904 17:11:19.169586 3531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.170312 kubelet[3531]: E0904 17:11:19.169626 3531 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.610922 containerd[2067]: time="2024-09-04T17:11:19.609486646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:19.611661 containerd[2067]: time="2024-09-04T17:11:19.611612194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Sep 4 17:11:19.615719 containerd[2067]: time="2024-09-04T17:11:19.615665014Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:19.622686 containerd[2067]: time="2024-09-04T17:11:19.622625458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:19.625200 containerd[2067]: time="2024-09-04T17:11:19.625124722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.767082064s" Sep 4 17:11:19.625375 containerd[2067]: time="2024-09-04T17:11:19.625213174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Sep 4 17:11:19.630971 containerd[2067]: time="2024-09-04T17:11:19.630639154Z" level=info msg="CreateContainer within sandbox \"f68fe1eac4626f2990f8e198172a96ef0a919003ee75f32a2c33d723857cc8a3\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:11:19.666086 containerd[2067]: time="2024-09-04T17:11:19.665916143Z" level=info msg="CreateContainer within sandbox \"f68fe1eac4626f2990f8e198172a96ef0a919003ee75f32a2c33d723857cc8a3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"240a09cc4fe4bb334d84a0c5082ca313b96311be03b1d95a35d6585aa8c9770e\"" Sep 4 17:11:19.669974 containerd[2067]: time="2024-09-04T17:11:19.669858635Z" level=info msg="StartContainer for \"240a09cc4fe4bb334d84a0c5082ca313b96311be03b1d95a35d6585aa8c9770e\"" Sep 4 17:11:19.822892 containerd[2067]: time="2024-09-04T17:11:19.822725555Z" level=info msg="StartContainer for \"240a09cc4fe4bb334d84a0c5082ca313b96311be03b1d95a35d6585aa8c9770e\" returns successfully" Sep 4 17:11:19.916506 kubelet[3531]: E0904 17:11:19.912567 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxpqm" podUID="9f07346e-efe9-4d91-a84b-47e3d545d647" Sep 4 17:11:19.927398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-240a09cc4fe4bb334d84a0c5082ca313b96311be03b1d95a35d6585aa8c9770e-rootfs.mount: Deactivated successfully. 
Sep 4 17:11:20.134934 kubelet[3531]: I0904 17:11:20.133756 3531 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:11:20.160509 containerd[2067]: time="2024-09-04T17:11:20.160323177Z" level=info msg="shim disconnected" id=240a09cc4fe4bb334d84a0c5082ca313b96311be03b1d95a35d6585aa8c9770e namespace=k8s.io Sep 4 17:11:20.164674 containerd[2067]: time="2024-09-04T17:11:20.164337549Z" level=warning msg="cleaning up after shim disconnected" id=240a09cc4fe4bb334d84a0c5082ca313b96311be03b1d95a35d6585aa8c9770e namespace=k8s.io Sep 4 17:11:20.164674 containerd[2067]: time="2024-09-04T17:11:20.164416629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:11:21.151307 containerd[2067]: time="2024-09-04T17:11:21.148834282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:11:21.914801 kubelet[3531]: E0904 17:11:21.912753 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxpqm" podUID="9f07346e-efe9-4d91-a84b-47e3d545d647" Sep 4 17:11:23.912309 kubelet[3531]: E0904 17:11:23.911593 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxpqm" podUID="9f07346e-efe9-4d91-a84b-47e3d545d647" Sep 4 17:11:25.414515 containerd[2067]: time="2024-09-04T17:11:25.414448623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:25.416018 containerd[2067]: time="2024-09-04T17:11:25.415960875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" 
Sep 4 17:11:25.416785 containerd[2067]: time="2024-09-04T17:11:25.416698719Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:25.422360 containerd[2067]: time="2024-09-04T17:11:25.422294439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:25.424334 containerd[2067]: time="2024-09-04T17:11:25.424133943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 4.275208461s" Sep 4 17:11:25.424334 containerd[2067]: time="2024-09-04T17:11:25.424186359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Sep 4 17:11:25.430450 containerd[2067]: time="2024-09-04T17:11:25.430388931Z" level=info msg="CreateContainer within sandbox \"f68fe1eac4626f2990f8e198172a96ef0a919003ee75f32a2c33d723857cc8a3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:11:25.465919 containerd[2067]: time="2024-09-04T17:11:25.465114555Z" level=info msg="CreateContainer within sandbox \"f68fe1eac4626f2990f8e198172a96ef0a919003ee75f32a2c33d723857cc8a3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"46b6b6b0144e6c538be55ec428c655cd063e9f9509d90c30740774d121e824ef\"" Sep 4 17:11:25.468699 containerd[2067]: time="2024-09-04T17:11:25.468196023Z" level=info msg="StartContainer for \"46b6b6b0144e6c538be55ec428c655cd063e9f9509d90c30740774d121e824ef\"" Sep 4 
17:11:25.472637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872285898.mount: Deactivated successfully. Sep 4 17:11:25.580557 containerd[2067]: time="2024-09-04T17:11:25.580400212Z" level=info msg="StartContainer for \"46b6b6b0144e6c538be55ec428c655cd063e9f9509d90c30740774d121e824ef\" returns successfully" Sep 4 17:11:25.911947 kubelet[3531]: E0904 17:11:25.911733 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxpqm" podUID="9f07346e-efe9-4d91-a84b-47e3d545d647" Sep 4 17:11:26.444546 containerd[2067]: time="2024-09-04T17:11:26.444482260Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:11:26.487830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46b6b6b0144e6c538be55ec428c655cd063e9f9509d90c30740774d121e824ef-rootfs.mount: Deactivated successfully. 
Sep 4 17:11:26.515430 kubelet[3531]: I0904 17:11:26.514871 3531 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Sep 4 17:11:26.559860 kubelet[3531]: I0904 17:11:26.553431 3531 topology_manager.go:215] "Topology Admit Handler" podUID="d6f19b88-2c58-4fc4-8440-e3af1c017a65" podNamespace="kube-system" podName="coredns-5dd5756b68-xt7z7"
Sep 4 17:11:26.563423 kubelet[3531]: I0904 17:11:26.563381 3531 topology_manager.go:215] "Topology Admit Handler" podUID="d22f7da3-7f61-4333-ae07-e54a750d8f41" podNamespace="kube-system" podName="coredns-5dd5756b68-vz5lc"
Sep 4 17:11:26.568937 kubelet[3531]: I0904 17:11:26.568872 3531 topology_manager.go:215] "Topology Admit Handler" podUID="66a18fea-3d72-4ed1-bd9b-d7382885dcbb" podNamespace="calico-system" podName="calico-kube-controllers-757c5dc566-25jwl"
Sep 4 17:11:26.654626 kubelet[3531]: I0904 17:11:26.654559 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66a18fea-3d72-4ed1-bd9b-d7382885dcbb-tigera-ca-bundle\") pod \"calico-kube-controllers-757c5dc566-25jwl\" (UID: \"66a18fea-3d72-4ed1-bd9b-d7382885dcbb\") " pod="calico-system/calico-kube-controllers-757c5dc566-25jwl"
Sep 4 17:11:26.654853 kubelet[3531]: I0904 17:11:26.654687 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d22f7da3-7f61-4333-ae07-e54a750d8f41-config-volume\") pod \"coredns-5dd5756b68-vz5lc\" (UID: \"d22f7da3-7f61-4333-ae07-e54a750d8f41\") " pod="kube-system/coredns-5dd5756b68-vz5lc"
Sep 4 17:11:26.654853 kubelet[3531]: I0904 17:11:26.654754 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zpgk\" (UniqueName: \"kubernetes.io/projected/d6f19b88-2c58-4fc4-8440-e3af1c017a65-kube-api-access-7zpgk\") pod \"coredns-5dd5756b68-xt7z7\" (UID: \"d6f19b88-2c58-4fc4-8440-e3af1c017a65\") " pod="kube-system/coredns-5dd5756b68-xt7z7"
Sep 4 17:11:26.654853 kubelet[3531]: I0904 17:11:26.654817 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j9dg\" (UniqueName: \"kubernetes.io/projected/66a18fea-3d72-4ed1-bd9b-d7382885dcbb-kube-api-access-6j9dg\") pod \"calico-kube-controllers-757c5dc566-25jwl\" (UID: \"66a18fea-3d72-4ed1-bd9b-d7382885dcbb\") " pod="calico-system/calico-kube-controllers-757c5dc566-25jwl"
Sep 4 17:11:26.655025 kubelet[3531]: I0904 17:11:26.654871 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhjh7\" (UniqueName: \"kubernetes.io/projected/d22f7da3-7f61-4333-ae07-e54a750d8f41-kube-api-access-jhjh7\") pod \"coredns-5dd5756b68-vz5lc\" (UID: \"d22f7da3-7f61-4333-ae07-e54a750d8f41\") " pod="kube-system/coredns-5dd5756b68-vz5lc"
Sep 4 17:11:26.655025 kubelet[3531]: I0904 17:11:26.654917 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6f19b88-2c58-4fc4-8440-e3af1c017a65-config-volume\") pod \"coredns-5dd5756b68-xt7z7\" (UID: \"d6f19b88-2c58-4fc4-8440-e3af1c017a65\") " pod="kube-system/coredns-5dd5756b68-xt7z7"
Sep 4 17:11:26.895346 containerd[2067]: time="2024-09-04T17:11:26.894816943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xt7z7,Uid:d6f19b88-2c58-4fc4-8440-e3af1c017a65,Namespace:kube-system,Attempt:0,}"
Sep 4 17:11:26.906159 containerd[2067]: time="2024-09-04T17:11:26.905875567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vz5lc,Uid:d22f7da3-7f61-4333-ae07-e54a750d8f41,Namespace:kube-system,Attempt:0,}"
Sep 4 17:11:26.914656 containerd[2067]: time="2024-09-04T17:11:26.914497447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757c5dc566-25jwl,Uid:66a18fea-3d72-4ed1-bd9b-d7382885dcbb,Namespace:calico-system,Attempt:0,}"
Sep 4 17:11:27.334450 containerd[2067]: time="2024-09-04T17:11:27.334150229Z" level=info msg="shim disconnected" id=46b6b6b0144e6c538be55ec428c655cd063e9f9509d90c30740774d121e824ef namespace=k8s.io
Sep 4 17:11:27.334450 containerd[2067]: time="2024-09-04T17:11:27.334245149Z" level=warning msg="cleaning up after shim disconnected" id=46b6b6b0144e6c538be55ec428c655cd063e9f9509d90c30740774d121e824ef namespace=k8s.io
Sep 4 17:11:27.334450 containerd[2067]: time="2024-09-04T17:11:27.334303409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:11:27.560299 containerd[2067]: time="2024-09-04T17:11:27.558244710Z" level=error msg="Failed to destroy network for sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.560299 containerd[2067]: time="2024-09-04T17:11:27.558343866Z" level=error msg="Failed to destroy network for sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.563022 containerd[2067]: time="2024-09-04T17:11:27.560644446Z" level=error msg="encountered an error cleaning up failed sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.563022 containerd[2067]: time="2024-09-04T17:11:27.561595590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xt7z7,Uid:d6f19b88-2c58-4fc4-8440-e3af1c017a65,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.565007 kubelet[3531]: E0904 17:11:27.563535 3531 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.565007 kubelet[3531]: E0904 17:11:27.563629 3531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-xt7z7"
Sep 4 17:11:27.565007 kubelet[3531]: E0904 17:11:27.563669 3531 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-xt7z7"
Sep 4 17:11:27.567395 kubelet[3531]: E0904 17:11:27.563762 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-xt7z7_kube-system(d6f19b88-2c58-4fc4-8440-e3af1c017a65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-xt7z7_kube-system(d6f19b88-2c58-4fc4-8440-e3af1c017a65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-xt7z7" podUID="d6f19b88-2c58-4fc4-8440-e3af1c017a65"
Sep 4 17:11:27.565941 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e-shm.mount: Deactivated successfully.
Sep 4 17:11:27.571844 containerd[2067]: time="2024-09-04T17:11:27.567931866Z" level=error msg="encountered an error cleaning up failed sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.571844 containerd[2067]: time="2024-09-04T17:11:27.570689262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757c5dc566-25jwl,Uid:66a18fea-3d72-4ed1-bd9b-d7382885dcbb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.572030 kubelet[3531]: E0904 17:11:27.570984 3531 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.572030 kubelet[3531]: E0904 17:11:27.571058 3531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-757c5dc566-25jwl"
Sep 4 17:11:27.572030 kubelet[3531]: E0904 17:11:27.571100 3531 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-757c5dc566-25jwl"
Sep 4 17:11:27.572219 kubelet[3531]: E0904 17:11:27.571198 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-757c5dc566-25jwl_calico-system(66a18fea-3d72-4ed1-bd9b-d7382885dcbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-757c5dc566-25jwl_calico-system(66a18fea-3d72-4ed1-bd9b-d7382885dcbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-757c5dc566-25jwl" podUID="66a18fea-3d72-4ed1-bd9b-d7382885dcbb"
Sep 4 17:11:27.575381 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13-shm.mount: Deactivated successfully.
Sep 4 17:11:27.587568 containerd[2067]: time="2024-09-04T17:11:27.583516386Z" level=error msg="Failed to destroy network for sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.587568 containerd[2067]: time="2024-09-04T17:11:27.584106510Z" level=error msg="encountered an error cleaning up failed sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.587568 containerd[2067]: time="2024-09-04T17:11:27.584175330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vz5lc,Uid:d22f7da3-7f61-4333-ae07-e54a750d8f41,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.587817 kubelet[3531]: E0904 17:11:27.585536 3531 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:27.587817 kubelet[3531]: E0904 17:11:27.585608 3531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-vz5lc"
Sep 4 17:11:27.587817 kubelet[3531]: E0904 17:11:27.585666 3531 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-vz5lc"
Sep 4 17:11:27.588006 kubelet[3531]: E0904 17:11:27.585741 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-vz5lc_kube-system(d22f7da3-7f61-4333-ae07-e54a750d8f41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-vz5lc_kube-system(d22f7da3-7f61-4333-ae07-e54a750d8f41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-vz5lc" podUID="d22f7da3-7f61-4333-ae07-e54a750d8f41"
Sep 4 17:11:27.595283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c-shm.mount: Deactivated successfully.
Sep 4 17:11:27.916900 containerd[2067]: time="2024-09-04T17:11:27.916821920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxpqm,Uid:9f07346e-efe9-4d91-a84b-47e3d545d647,Namespace:calico-system,Attempt:0,}"
Sep 4 17:11:28.018800 containerd[2067]: time="2024-09-04T17:11:28.018715912Z" level=error msg="Failed to destroy network for sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:28.019417 containerd[2067]: time="2024-09-04T17:11:28.019359340Z" level=error msg="encountered an error cleaning up failed sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:28.019498 containerd[2067]: time="2024-09-04T17:11:28.019438600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxpqm,Uid:9f07346e-efe9-4d91-a84b-47e3d545d647,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:28.019831 kubelet[3531]: E0904 17:11:28.019790 3531 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:28.019949 kubelet[3531]: E0904 17:11:28.019879 3531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxpqm"
Sep 4 17:11:28.019949 kubelet[3531]: E0904 17:11:28.019919 3531 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxpqm"
Sep 4 17:11:28.021335 kubelet[3531]: E0904 17:11:28.020423 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxpqm_calico-system(9f07346e-efe9-4d91-a84b-47e3d545d647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxpqm_calico-system(9f07346e-efe9-4d91-a84b-47e3d545d647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxpqm" podUID="9f07346e-efe9-4d91-a84b-47e3d545d647"
Sep 4 17:11:28.174965 kubelet[3531]: I0904 17:11:28.173780 3531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e"
Sep 4 17:11:28.176605 containerd[2067]: time="2024-09-04T17:11:28.176522261Z" level=info msg="StopPodSandbox for \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\""
Sep 4 17:11:28.178833 containerd[2067]: time="2024-09-04T17:11:28.177983453Z" level=info msg="Ensure that sandbox 338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e in task-service has been cleanup successfully"
Sep 4 17:11:28.186400 containerd[2067]: time="2024-09-04T17:11:28.186103481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Sep 4 17:11:28.188828 kubelet[3531]: I0904 17:11:28.188564 3531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12"
Sep 4 17:11:28.194885 containerd[2067]: time="2024-09-04T17:11:28.192238145Z" level=info msg="StopPodSandbox for \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\""
Sep 4 17:11:28.195993 kubelet[3531]: I0904 17:11:28.195961 3531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13"
Sep 4 17:11:28.200373 containerd[2067]: time="2024-09-04T17:11:28.199626821Z" level=info msg="StopPodSandbox for \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\""
Sep 4 17:11:28.200373 containerd[2067]: time="2024-09-04T17:11:28.199961873Z" level=info msg="Ensure that sandbox 3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13 in task-service has been cleanup successfully"
Sep 4 17:11:28.207054 containerd[2067]: time="2024-09-04T17:11:28.203657105Z" level=info msg="Ensure that sandbox b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12 in task-service has been cleanup successfully"
Sep 4 17:11:28.217776 kubelet[3531]: I0904 17:11:28.217719 3531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c"
Sep 4 17:11:28.221883 containerd[2067]: time="2024-09-04T17:11:28.221681081Z" level=info msg="StopPodSandbox for \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\""
Sep 4 17:11:28.222144 containerd[2067]: time="2024-09-04T17:11:28.222047921Z" level=info msg="Ensure that sandbox a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c in task-service has been cleanup successfully"
Sep 4 17:11:28.332862 containerd[2067]: time="2024-09-04T17:11:28.332779998Z" level=error msg="StopPodSandbox for \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\" failed" error="failed to destroy network for sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:28.333782 kubelet[3531]: E0904 17:11:28.333444 3531 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13"
Sep 4 17:11:28.333782 kubelet[3531]: E0904 17:11:28.333552 3531 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13"}
Sep 4 17:11:28.333782 kubelet[3531]: E0904 17:11:28.333624 3531 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66a18fea-3d72-4ed1-bd9b-d7382885dcbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:11:28.333782 kubelet[3531]: E0904 17:11:28.333676 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66a18fea-3d72-4ed1-bd9b-d7382885dcbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-757c5dc566-25jwl" podUID="66a18fea-3d72-4ed1-bd9b-d7382885dcbb"
Sep 4 17:11:28.344947 containerd[2067]: time="2024-09-04T17:11:28.344883810Z" level=error msg="StopPodSandbox for \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\" failed" error="failed to destroy network for sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:28.345700 kubelet[3531]: E0904 17:11:28.345646 3531 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e"
Sep 4 17:11:28.345848 kubelet[3531]: E0904 17:11:28.345716 3531 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e"}
Sep 4 17:11:28.345848 kubelet[3531]: E0904 17:11:28.345780 3531 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6f19b88-2c58-4fc4-8440-e3af1c017a65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:11:28.345848 kubelet[3531]: E0904 17:11:28.345836 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6f19b88-2c58-4fc4-8440-e3af1c017a65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-xt7z7" podUID="d6f19b88-2c58-4fc4-8440-e3af1c017a65"
Sep 4 17:11:28.347127 containerd[2067]: time="2024-09-04T17:11:28.346997778Z" level=error msg="StopPodSandbox for \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\" failed" error="failed to destroy network for sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:28.347752 kubelet[3531]: E0904 17:11:28.347491 3531 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12"
Sep 4 17:11:28.348126 kubelet[3531]: E0904 17:11:28.347864 3531 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12"}
Sep 4 17:11:28.348126 kubelet[3531]: E0904 17:11:28.347979 3531 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f07346e-efe9-4d91-a84b-47e3d545d647\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:11:28.348126 kubelet[3531]: E0904 17:11:28.348064 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f07346e-efe9-4d91-a84b-47e3d545d647\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxpqm" podUID="9f07346e-efe9-4d91-a84b-47e3d545d647"
Sep 4 17:11:28.365413 containerd[2067]: time="2024-09-04T17:11:28.365341218Z" level=error msg="StopPodSandbox for \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\" failed" error="failed to destroy network for sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:28.365943 kubelet[3531]: E0904 17:11:28.365728 3531 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c"
Sep 4 17:11:28.365943 kubelet[3531]: E0904 17:11:28.365792 3531 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c"}
Sep 4 17:11:28.365943 kubelet[3531]: E0904 17:11:28.365857 3531 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d22f7da3-7f61-4333-ae07-e54a750d8f41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:11:28.365943 kubelet[3531]: E0904 17:11:28.365908 3531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d22f7da3-7f61-4333-ae07-e54a750d8f41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-vz5lc" podUID="d22f7da3-7f61-4333-ae07-e54a750d8f41"
Sep 4 17:11:28.484855 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12-shm.mount: Deactivated successfully.
Sep 4 17:11:34.783409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907859260.mount: Deactivated successfully.
Sep 4 17:11:34.843042 containerd[2067]: time="2024-09-04T17:11:34.842972114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:34.844468 containerd[2067]: time="2024-09-04T17:11:34.844397006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300"
Sep 4 17:11:34.845826 containerd[2067]: time="2024-09-04T17:11:34.845749214Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:34.850631 containerd[2067]: time="2024-09-04T17:11:34.850534154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:34.852173 containerd[2067]: time="2024-09-04T17:11:34.851892266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 6.665720421s"
Sep 4 17:11:34.852173 containerd[2067]: time="2024-09-04T17:11:34.851948954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\""
Sep 4 17:11:34.879227 containerd[2067]: time="2024-09-04T17:11:34.879014270Z" level=info msg="CreateContainer within sandbox \"f68fe1eac4626f2990f8e198172a96ef0a919003ee75f32a2c33d723857cc8a3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 4 17:11:34.902336 containerd[2067]: time="2024-09-04T17:11:34.901830554Z" level=info msg="CreateContainer within sandbox \"f68fe1eac4626f2990f8e198172a96ef0a919003ee75f32a2c33d723857cc8a3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2aa4fb42d73731ce5178eada164c7a407f7ee2d50d2f8cadb0aca6624c2d9372\""
Sep 4 17:11:34.905145 containerd[2067]: time="2024-09-04T17:11:34.905086826Z" level=info msg="StartContainer for \"2aa4fb42d73731ce5178eada164c7a407f7ee2d50d2f8cadb0aca6624c2d9372\""
Sep 4 17:11:34.911870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003748627.mount: Deactivated successfully.
Sep 4 17:11:35.010201 containerd[2067]: time="2024-09-04T17:11:35.009420287Z" level=info msg="StartContainer for \"2aa4fb42d73731ce5178eada164c7a407f7ee2d50d2f8cadb0aca6624c2d9372\" returns successfully"
Sep 4 17:11:35.127953 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 4 17:11:35.128124 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Sep 4 17:11:36.314456 kubelet[3531]: I0904 17:11:36.314411 3531 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 17:11:36.355498 kubelet[3531]: I0904 17:11:36.355141 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-4n2c2" podStartSLOduration=2.61474766 podCreationTimestamp="2024-09-04 17:11:14 +0000 UTC" firstStartedPulling="2024-09-04 17:11:15.111982456 +0000 UTC m=+23.424174069" lastFinishedPulling="2024-09-04 17:11:34.852320078 +0000 UTC m=+43.164511691" observedRunningTime="2024-09-04 17:11:35.294173712 +0000 UTC m=+43.606365349" watchObservedRunningTime="2024-09-04 17:11:36.355085282 +0000 UTC m=+44.667276919"
Sep 4 17:11:37.403365 kernel: bpftool[4721]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 4 17:11:37.741845 (udev-worker)[4524]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:11:37.745010 systemd-networkd[1610]: vxlan.calico: Link UP
Sep 4 17:11:37.745018 systemd-networkd[1610]: vxlan.calico: Gained carrier
Sep 4 17:11:37.799671 (udev-worker)[4525]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:11:38.497738 systemd[1]: Started sshd@7-172.31.29.2:22-139.178.89.65:54084.service - OpenSSH per-connection server daemon (139.178.89.65:54084).
Sep 4 17:11:38.685967 sshd[4793]: Accepted publickey for core from 139.178.89.65 port 54084 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:38.689168 sshd[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:38.697249 systemd-logind[2036]: New session 8 of user core.
Sep 4 17:11:38.707760 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:11:38.943311 systemd-networkd[1610]: vxlan.calico: Gained IPv6LL
Sep 4 17:11:38.981012 sshd[4793]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:38.988741 systemd[1]: sshd@7-172.31.29.2:22-139.178.89.65:54084.service: Deactivated successfully.
Sep 4 17:11:38.994729 systemd-logind[2036]: Session 8 logged out. Waiting for processes to exit.
Sep 4 17:11:38.996444 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 17:11:38.999039 systemd-logind[2036]: Removed session 8.
Sep 4 17:11:40.913842 containerd[2067]: time="2024-09-04T17:11:40.911746088Z" level=info msg="StopPodSandbox for \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\""
Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.017 [INFO][4823] k8s.go 608: Cleaning up netns ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12"
Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.017 [INFO][4823] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" iface="eth0" netns="/var/run/netns/cni-601d375d-8672-9371-b4fb-2de279430f3a"
Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.018 [INFO][4823] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" iface="eth0" netns="/var/run/netns/cni-601d375d-8672-9371-b4fb-2de279430f3a"
Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.019 [INFO][4823] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" iface="eth0" netns="/var/run/netns/cni-601d375d-8672-9371-b4fb-2de279430f3a"
Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.019 [INFO][4823] k8s.go 615: Releasing IP address(es) ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12"
Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.019 [INFO][4823] utils.go 188: Calico CNI releasing IP address ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12"
Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.060 [INFO][4829] ipam_plugin.go 417: Releasing address using handleID ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" HandleID="k8s-pod-network.b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0"
Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.060 [INFO][4829] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.061 [INFO][4829] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.073 [WARNING][4829] ipam_plugin.go 434: Asked to release address but it doesn't exist.
Ignoring ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" HandleID="k8s-pod-network.b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.073 [INFO][4829] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" HandleID="k8s-pod-network.b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.076 [INFO][4829] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:41.083601 containerd[2067]: 2024-09-04 17:11:41.081 [INFO][4823] k8s.go 621: Teardown processing complete. ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Sep 4 17:11:41.087776 containerd[2067]: time="2024-09-04T17:11:41.083824433Z" level=info msg="TearDown network for sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\" successfully" Sep 4 17:11:41.087776 containerd[2067]: time="2024-09-04T17:11:41.083866073Z" level=info msg="StopPodSandbox for \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\" returns successfully" Sep 4 17:11:41.087776 containerd[2067]: time="2024-09-04T17:11:41.086641385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxpqm,Uid:9f07346e-efe9-4d91-a84b-47e3d545d647,Namespace:calico-system,Attempt:1,}" Sep 4 17:11:41.092953 systemd[1]: run-netns-cni\x2d601d375d\x2d8672\x2d9371\x2db4fb\x2d2de279430f3a.mount: Deactivated successfully. Sep 4 17:11:41.315442 systemd-networkd[1610]: cali39fe665d9ba: Link UP Sep 4 17:11:41.317968 systemd-networkd[1610]: cali39fe665d9ba: Gained carrier Sep 4 17:11:41.320784 (udev-worker)[4855]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.174 [INFO][4840] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0 csi-node-driver- calico-system 9f07346e-efe9-4d91-a84b-47e3d545d647 765 0 2024-09-04 17:11:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-29-2 csi-node-driver-lxpqm eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali39fe665d9ba [] []}} ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Namespace="calico-system" Pod="csi-node-driver-lxpqm" WorkloadEndpoint="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.174 [INFO][4840] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Namespace="calico-system" Pod="csi-node-driver-lxpqm" WorkloadEndpoint="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.227 [INFO][4847] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" HandleID="k8s-pod-network.262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.249 [INFO][4847] ipam_plugin.go 270: Auto assigning IP ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" HandleID="k8s-pod-network.262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003467c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-2", "pod":"csi-node-driver-lxpqm", "timestamp":"2024-09-04 17:11:41.227930142 +0000 UTC"}, Hostname:"ip-172-31-29-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.249 [INFO][4847] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.250 [INFO][4847] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.250 [INFO][4847] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-2' Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.253 [INFO][4847] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" host="ip-172-31-29-2" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.262 [INFO][4847] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-2" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.271 [INFO][4847] ipam.go 489: Trying affinity for 192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.274 [INFO][4847] ipam.go 155: Attempting to load block cidr=192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.277 [INFO][4847] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.277 [INFO][4847] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.0/26 
handle="k8s-pod-network.262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" host="ip-172-31-29-2" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.280 [INFO][4847] ipam.go 1685: Creating new handle: k8s-pod-network.262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.287 [INFO][4847] ipam.go 1203: Writing block in order to claim IPs block=192.168.96.0/26 handle="k8s-pod-network.262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" host="ip-172-31-29-2" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.296 [INFO][4847] ipam.go 1216: Successfully claimed IPs: [192.168.96.1/26] block=192.168.96.0/26 handle="k8s-pod-network.262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" host="ip-172-31-29-2" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.296 [INFO][4847] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.1/26] handle="k8s-pod-network.262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" host="ip-172-31-29-2" Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.296 [INFO][4847] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:11:41.362408 containerd[2067]: 2024-09-04 17:11:41.296 [INFO][4847] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.96.1/26] IPv6=[] ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" HandleID="k8s-pod-network.262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:41.364358 containerd[2067]: 2024-09-04 17:11:41.300 [INFO][4840] k8s.go 386: Populated endpoint ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Namespace="calico-system" Pod="csi-node-driver-lxpqm" WorkloadEndpoint="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f07346e-efe9-4d91-a84b-47e3d545d647", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"", Pod:"csi-node-driver-lxpqm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali39fe665d9ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:41.364358 containerd[2067]: 2024-09-04 17:11:41.300 [INFO][4840] k8s.go 387: Calico CNI using IPs: [192.168.96.1/32] ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Namespace="calico-system" Pod="csi-node-driver-lxpqm" WorkloadEndpoint="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:41.364358 containerd[2067]: 2024-09-04 17:11:41.300 [INFO][4840] dataplane_linux.go 68: Setting the host side veth name to cali39fe665d9ba ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Namespace="calico-system" Pod="csi-node-driver-lxpqm" WorkloadEndpoint="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:41.364358 containerd[2067]: 2024-09-04 17:11:41.315 [INFO][4840] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Namespace="calico-system" Pod="csi-node-driver-lxpqm" WorkloadEndpoint="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:41.364358 containerd[2067]: 2024-09-04 17:11:41.318 [INFO][4840] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Namespace="calico-system" Pod="csi-node-driver-lxpqm" WorkloadEndpoint="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f07346e-efe9-4d91-a84b-47e3d545d647", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 14, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd", Pod:"csi-node-driver-lxpqm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali39fe665d9ba", MAC:"4a:e5:ee:39:d3:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:41.364358 containerd[2067]: 2024-09-04 17:11:41.347 [INFO][4840] k8s.go 500: Wrote updated endpoint to datastore ContainerID="262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd" Namespace="calico-system" Pod="csi-node-driver-lxpqm" WorkloadEndpoint="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:41.417351 containerd[2067]: time="2024-09-04T17:11:41.416682031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:41.417351 containerd[2067]: time="2024-09-04T17:11:41.416795911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:41.417351 containerd[2067]: time="2024-09-04T17:11:41.416837083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:41.417351 containerd[2067]: time="2024-09-04T17:11:41.416871055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:41.573696 containerd[2067]: time="2024-09-04T17:11:41.573313939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxpqm,Uid:9f07346e-efe9-4d91-a84b-47e3d545d647,Namespace:calico-system,Attempt:1,} returns sandbox id \"262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd\"" Sep 4 17:11:41.581677 containerd[2067]: time="2024-09-04T17:11:41.581510588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:11:41.915618 containerd[2067]: time="2024-09-04T17:11:41.912761097Z" level=info msg="StopPodSandbox for \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\"" Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.008 [INFO][4924] k8s.go 608: Cleaning up netns ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.009 [INFO][4924] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" iface="eth0" netns="/var/run/netns/cni-1356ca50-767d-a666-b46e-aa6af088e112" Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.009 [INFO][4924] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" iface="eth0" netns="/var/run/netns/cni-1356ca50-767d-a666-b46e-aa6af088e112" Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.009 [INFO][4924] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" iface="eth0" netns="/var/run/netns/cni-1356ca50-767d-a666-b46e-aa6af088e112" Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.010 [INFO][4924] k8s.go 615: Releasing IP address(es) ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.010 [INFO][4924] utils.go 188: Calico CNI releasing IP address ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.049 [INFO][4931] ipam_plugin.go 417: Releasing address using handleID ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" HandleID="k8s-pod-network.3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.050 [INFO][4931] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.050 [INFO][4931] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.063 [WARNING][4931] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" HandleID="k8s-pod-network.3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.063 [INFO][4931] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" HandleID="k8s-pod-network.3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.066 [INFO][4931] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:42.073739 containerd[2067]: 2024-09-04 17:11:42.069 [INFO][4924] k8s.go 621: Teardown processing complete. ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:42.075199 containerd[2067]: time="2024-09-04T17:11:42.074066550Z" level=info msg="TearDown network for sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\" successfully" Sep 4 17:11:42.075199 containerd[2067]: time="2024-09-04T17:11:42.074105178Z" level=info msg="StopPodSandbox for \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\" returns successfully" Sep 4 17:11:42.075453 containerd[2067]: time="2024-09-04T17:11:42.075383742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757c5dc566-25jwl,Uid:66a18fea-3d72-4ed1-bd9b-d7382885dcbb,Namespace:calico-system,Attempt:1,}" Sep 4 17:11:42.090604 systemd[1]: run-netns-cni\x2d1356ca50\x2d767d\x2da666\x2db46e\x2daa6af088e112.mount: Deactivated successfully. 
Sep 4 17:11:42.398481 systemd-networkd[1610]: cali39fe665d9ba: Gained IPv6LL Sep 4 17:11:42.414210 systemd-networkd[1610]: cali382ea1bc722: Link UP Sep 4 17:11:42.415800 systemd-networkd[1610]: cali382ea1bc722: Gained carrier Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.282 [INFO][4938] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0 calico-kube-controllers-757c5dc566- calico-system 66a18fea-3d72-4ed1-bd9b-d7382885dcbb 776 0 2024-09-04 17:11:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:757c5dc566 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-2 calico-kube-controllers-757c5dc566-25jwl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali382ea1bc722 [] []}} ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" Namespace="calico-system" Pod="calico-kube-controllers-757c5dc566-25jwl" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.282 [INFO][4938] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" Namespace="calico-system" Pod="calico-kube-controllers-757c5dc566-25jwl" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.333 [INFO][4949] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" HandleID="k8s-pod-network.7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" 
Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.359 [INFO][4949] ipam_plugin.go 270: Auto assigning IP ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" HandleID="k8s-pod-network.7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000263de0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-2", "pod":"calico-kube-controllers-757c5dc566-25jwl", "timestamp":"2024-09-04 17:11:42.333853915 +0000 UTC"}, Hostname:"ip-172-31-29-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.359 [INFO][4949] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.359 [INFO][4949] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.359 [INFO][4949] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-2' Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.362 [INFO][4949] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" host="ip-172-31-29-2" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.368 [INFO][4949] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-2" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.375 [INFO][4949] ipam.go 489: Trying affinity for 192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.378 [INFO][4949] ipam.go 155: Attempting to load block cidr=192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.381 [INFO][4949] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.381 [INFO][4949] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.0/26 handle="k8s-pod-network.7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" host="ip-172-31-29-2" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.386 [INFO][4949] ipam.go 1685: Creating new handle: k8s-pod-network.7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7 Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.392 [INFO][4949] ipam.go 1203: Writing block in order to claim IPs block=192.168.96.0/26 handle="k8s-pod-network.7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" host="ip-172-31-29-2" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.403 [INFO][4949] ipam.go 1216: Successfully claimed IPs: [192.168.96.2/26] block=192.168.96.0/26 handle="k8s-pod-network.7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" 
host="ip-172-31-29-2" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.403 [INFO][4949] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.2/26] handle="k8s-pod-network.7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" host="ip-172-31-29-2" Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.404 [INFO][4949] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:42.463322 containerd[2067]: 2024-09-04 17:11:42.404 [INFO][4949] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.96.2/26] IPv6=[] ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" HandleID="k8s-pod-network.7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:42.464579 containerd[2067]: 2024-09-04 17:11:42.408 [INFO][4938] k8s.go 386: Populated endpoint ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" Namespace="calico-system" Pod="calico-kube-controllers-757c5dc566-25jwl" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0", GenerateName:"calico-kube-controllers-757c5dc566-", Namespace:"calico-system", SelfLink:"", UID:"66a18fea-3d72-4ed1-bd9b-d7382885dcbb", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"757c5dc566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"", Pod:"calico-kube-controllers-757c5dc566-25jwl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali382ea1bc722", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:42.464579 containerd[2067]: 2024-09-04 17:11:42.408 [INFO][4938] k8s.go 387: Calico CNI using IPs: [192.168.96.2/32] ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" Namespace="calico-system" Pod="calico-kube-controllers-757c5dc566-25jwl" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:42.464579 containerd[2067]: 2024-09-04 17:11:42.408 [INFO][4938] dataplane_linux.go 68: Setting the host side veth name to cali382ea1bc722 ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" Namespace="calico-system" Pod="calico-kube-controllers-757c5dc566-25jwl" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:42.464579 containerd[2067]: 2024-09-04 17:11:42.416 [INFO][4938] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" Namespace="calico-system" Pod="calico-kube-controllers-757c5dc566-25jwl" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:42.464579 containerd[2067]: 2024-09-04 17:11:42.419 [INFO][4938] k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" Namespace="calico-system" Pod="calico-kube-controllers-757c5dc566-25jwl" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0", GenerateName:"calico-kube-controllers-757c5dc566-", Namespace:"calico-system", SelfLink:"", UID:"66a18fea-3d72-4ed1-bd9b-d7382885dcbb", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"757c5dc566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7", Pod:"calico-kube-controllers-757c5dc566-25jwl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali382ea1bc722", MAC:"aa:70:29:3a:66:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:42.464579 containerd[2067]: 2024-09-04 17:11:42.455 [INFO][4938] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7" Namespace="calico-system" Pod="calico-kube-controllers-757c5dc566-25jwl" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:42.542308 containerd[2067]: time="2024-09-04T17:11:42.541691984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:42.544089 containerd[2067]: time="2024-09-04T17:11:42.542335112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:42.544228 containerd[2067]: time="2024-09-04T17:11:42.544145624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:42.544323 containerd[2067]: time="2024-09-04T17:11:42.544217396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:42.645398 containerd[2067]: time="2024-09-04T17:11:42.645231501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757c5dc566-25jwl,Uid:66a18fea-3d72-4ed1-bd9b-d7382885dcbb,Namespace:calico-system,Attempt:1,} returns sandbox id \"7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7\"" Sep 4 17:11:42.913110 containerd[2067]: time="2024-09-04T17:11:42.912590062Z" level=info msg="StopPodSandbox for \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\"" Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.134 [INFO][5024] k8s.go 608: Cleaning up netns ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.134 [INFO][5024] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" iface="eth0" netns="/var/run/netns/cni-6f5866f9-26d9-6be5-778b-2333ec11595c" Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.136 [INFO][5024] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" iface="eth0" netns="/var/run/netns/cni-6f5866f9-26d9-6be5-778b-2333ec11595c" Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.137 [INFO][5024] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" iface="eth0" netns="/var/run/netns/cni-6f5866f9-26d9-6be5-778b-2333ec11595c" Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.137 [INFO][5024] k8s.go 615: Releasing IP address(es) ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.137 [INFO][5024] utils.go 188: Calico CNI releasing IP address ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.237 [INFO][5035] ipam_plugin.go 417: Releasing address using handleID ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" HandleID="k8s-pod-network.338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.238 [INFO][5035] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.238 [INFO][5035] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.251 [WARNING][5035] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" HandleID="k8s-pod-network.338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.252 [INFO][5035] ipam_plugin.go 445: Releasing address using workloadID ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" HandleID="k8s-pod-network.338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.255 [INFO][5035] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:43.266445 containerd[2067]: 2024-09-04 17:11:43.259 [INFO][5024] k8s.go 621: Teardown processing complete. ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:43.273807 containerd[2067]: time="2024-09-04T17:11:43.267351872Z" level=info msg="TearDown network for sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\" successfully" Sep 4 17:11:43.273807 containerd[2067]: time="2024-09-04T17:11:43.268420748Z" level=info msg="StopPodSandbox for \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\" returns successfully" Sep 4 17:11:43.273807 containerd[2067]: time="2024-09-04T17:11:43.271543808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xt7z7,Uid:d6f19b88-2c58-4fc4-8440-e3af1c017a65,Namespace:kube-system,Attempt:1,}" Sep 4 17:11:43.274640 systemd[1]: run-netns-cni\x2d6f5866f9\x2d26d9\x2d6be5\x2d778b\x2d2333ec11595c.mount: Deactivated successfully. 
Sep 4 17:11:43.375307 containerd[2067]: time="2024-09-04T17:11:43.374349536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:43.379883 containerd[2067]: time="2024-09-04T17:11:43.379794428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:11:43.382955 containerd[2067]: time="2024-09-04T17:11:43.382870388Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:43.386758 containerd[2067]: time="2024-09-04T17:11:43.386567900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:43.389039 containerd[2067]: time="2024-09-04T17:11:43.388982684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.80735746s" Sep 4 17:11:43.389441 containerd[2067]: time="2024-09-04T17:11:43.389247116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:11:43.392909 containerd[2067]: time="2024-09-04T17:11:43.390987812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:11:43.399150 containerd[2067]: time="2024-09-04T17:11:43.398920677Z" level=info msg="CreateContainer within sandbox \"262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:11:43.445664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3450428857.mount: Deactivated successfully. Sep 4 17:11:43.449380 containerd[2067]: time="2024-09-04T17:11:43.447242505Z" level=info msg="CreateContainer within sandbox \"262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a3defdf16136f952176b0734253b6d59a707412948fd56881c3540f23dbe50af\"" Sep 4 17:11:43.450229 containerd[2067]: time="2024-09-04T17:11:43.450166413Z" level=info msg="StartContainer for \"a3defdf16136f952176b0734253b6d59a707412948fd56881c3540f23dbe50af\"" Sep 4 17:11:43.604572 systemd-networkd[1610]: cali0090d494f70: Link UP Sep 4 17:11:43.604953 systemd-networkd[1610]: cali0090d494f70: Gained carrier Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.413 [INFO][5052] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0 coredns-5dd5756b68- kube-system d6f19b88-2c58-4fc4-8440-e3af1c017a65 790 0 2024-09-04 17:11:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-2 coredns-5dd5756b68-xt7z7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0090d494f70 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Namespace="kube-system" Pod="coredns-5dd5756b68-xt7z7" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.413 [INFO][5052] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Namespace="kube-system" 
Pod="coredns-5dd5756b68-xt7z7" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.510 [INFO][5061] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" HandleID="k8s-pod-network.2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.544 [INFO][5061] ipam_plugin.go 270: Auto assigning IP ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" HandleID="k8s-pod-network.2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400026c5b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-2", "pod":"coredns-5dd5756b68-xt7z7", "timestamp":"2024-09-04 17:11:43.510001893 +0000 UTC"}, Hostname:"ip-172-31-29-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.544 [INFO][5061] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.544 [INFO][5061] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.544 [INFO][5061] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-2' Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.550 [INFO][5061] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" host="ip-172-31-29-2" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.560 [INFO][5061] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-2" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.569 [INFO][5061] ipam.go 489: Trying affinity for 192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.572 [INFO][5061] ipam.go 155: Attempting to load block cidr=192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.576 [INFO][5061] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.576 [INFO][5061] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.0/26 handle="k8s-pod-network.2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" host="ip-172-31-29-2" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.580 [INFO][5061] ipam.go 1685: Creating new handle: k8s-pod-network.2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41 Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.585 [INFO][5061] ipam.go 1203: Writing block in order to claim IPs block=192.168.96.0/26 handle="k8s-pod-network.2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" host="ip-172-31-29-2" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.593 [INFO][5061] ipam.go 1216: Successfully claimed IPs: [192.168.96.3/26] block=192.168.96.0/26 handle="k8s-pod-network.2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" 
host="ip-172-31-29-2" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.593 [INFO][5061] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.3/26] handle="k8s-pod-network.2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" host="ip-172-31-29-2" Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.593 [INFO][5061] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:43.647165 containerd[2067]: 2024-09-04 17:11:43.593 [INFO][5061] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.96.3/26] IPv6=[] ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" HandleID="k8s-pod-network.2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:43.649773 containerd[2067]: 2024-09-04 17:11:43.599 [INFO][5052] k8s.go 386: Populated endpoint ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Namespace="kube-system" Pod="coredns-5dd5756b68-xt7z7" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d6f19b88-2c58-4fc4-8440-e3af1c017a65", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"", Pod:"coredns-5dd5756b68-xt7z7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0090d494f70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:43.649773 containerd[2067]: 2024-09-04 17:11:43.599 [INFO][5052] k8s.go 387: Calico CNI using IPs: [192.168.96.3/32] ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Namespace="kube-system" Pod="coredns-5dd5756b68-xt7z7" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:43.649773 containerd[2067]: 2024-09-04 17:11:43.599 [INFO][5052] dataplane_linux.go 68: Setting the host side veth name to cali0090d494f70 ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Namespace="kube-system" Pod="coredns-5dd5756b68-xt7z7" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:43.649773 containerd[2067]: 2024-09-04 17:11:43.603 [INFO][5052] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Namespace="kube-system" Pod="coredns-5dd5756b68-xt7z7" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:43.649773 containerd[2067]: 2024-09-04 17:11:43.608 [INFO][5052] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Namespace="kube-system" Pod="coredns-5dd5756b68-xt7z7" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d6f19b88-2c58-4fc4-8440-e3af1c017a65", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41", Pod:"coredns-5dd5756b68-xt7z7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0090d494f70", MAC:"9a:33:97:50:db:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:43.649773 containerd[2067]: 2024-09-04 17:11:43.634 [INFO][5052] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41" Namespace="kube-system" Pod="coredns-5dd5756b68-xt7z7" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:43.696017 containerd[2067]: time="2024-09-04T17:11:43.695953162Z" level=info msg="StartContainer for \"a3defdf16136f952176b0734253b6d59a707412948fd56881c3540f23dbe50af\" returns successfully" Sep 4 17:11:43.730466 containerd[2067]: time="2024-09-04T17:11:43.730205494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:43.730466 containerd[2067]: time="2024-09-04T17:11:43.730380670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:43.730763 containerd[2067]: time="2024-09-04T17:11:43.730434070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:43.730763 containerd[2067]: time="2024-09-04T17:11:43.730469086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:43.825397 containerd[2067]: time="2024-09-04T17:11:43.825229403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xt7z7,Uid:d6f19b88-2c58-4fc4-8440-e3af1c017a65,Namespace:kube-system,Attempt:1,} returns sandbox id \"2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41\"" Sep 4 17:11:43.833166 containerd[2067]: time="2024-09-04T17:11:43.833027531Z" level=info msg="CreateContainer within sandbox \"2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:11:43.849070 containerd[2067]: time="2024-09-04T17:11:43.848890355Z" level=info msg="CreateContainer within sandbox \"2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a96faeff1e9baac6ab6c77e80a260308e8e94e7ddcea2ce8840341039fd41429\"" Sep 4 17:11:43.850449 containerd[2067]: time="2024-09-04T17:11:43.850358303Z" level=info msg="StartContainer for \"a96faeff1e9baac6ab6c77e80a260308e8e94e7ddcea2ce8840341039fd41429\"" Sep 4 17:11:43.915921 containerd[2067]: time="2024-09-04T17:11:43.914075531Z" level=info msg="StopPodSandbox for \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\"" Sep 4 17:11:43.958606 containerd[2067]: time="2024-09-04T17:11:43.958508483Z" level=info msg="StartContainer for \"a96faeff1e9baac6ab6c77e80a260308e8e94e7ddcea2ce8840341039fd41429\" returns successfully" Sep 4 17:11:44.021045 systemd[1]: Started sshd@8-172.31.29.2:22-139.178.89.65:54088.service - OpenSSH per-connection server daemon (139.178.89.65:54088). 
Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.053 [INFO][5187] k8s.go 608: Cleaning up netns ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.053 [INFO][5187] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" iface="eth0" netns="/var/run/netns/cni-c057c900-07cc-39cf-e7bc-25f9665e4463" Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.054 [INFO][5187] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" iface="eth0" netns="/var/run/netns/cni-c057c900-07cc-39cf-e7bc-25f9665e4463" Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.055 [INFO][5187] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" iface="eth0" netns="/var/run/netns/cni-c057c900-07cc-39cf-e7bc-25f9665e4463" Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.055 [INFO][5187] k8s.go 615: Releasing IP address(es) ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.056 [INFO][5187] utils.go 188: Calico CNI releasing IP address ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.099 [INFO][5207] ipam_plugin.go 417: Releasing address using handleID ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" HandleID="k8s-pod-network.a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.099 [INFO][5207] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.099 [INFO][5207] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.111 [WARNING][5207] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" HandleID="k8s-pod-network.a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.111 [INFO][5207] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" HandleID="k8s-pod-network.a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.114 [INFO][5207] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:44.126650 containerd[2067]: 2024-09-04 17:11:44.116 [INFO][5187] k8s.go 621: Teardown processing complete. 
ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:44.129521 containerd[2067]: time="2024-09-04T17:11:44.128430044Z" level=info msg="TearDown network for sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\" successfully" Sep 4 17:11:44.129521 containerd[2067]: time="2024-09-04T17:11:44.128910860Z" level=info msg="StopPodSandbox for \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\" returns successfully" Sep 4 17:11:44.132489 containerd[2067]: time="2024-09-04T17:11:44.131909540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vz5lc,Uid:d22f7da3-7f61-4333-ae07-e54a750d8f41,Namespace:kube-system,Attempt:1,}" Sep 4 17:11:44.238762 sshd[5205]: Accepted publickey for core from 139.178.89.65 port 54088 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:44.244967 sshd[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:44.254558 systemd-networkd[1610]: cali382ea1bc722: Gained IPv6LL Sep 4 17:11:44.258413 systemd-logind[2036]: New session 9 of user core. Sep 4 17:11:44.265014 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:11:44.306545 systemd[1]: run-netns-cni\x2dc057c900\x2d07cc\x2d39cf\x2de7bc\x2d25f9665e4463.mount: Deactivated successfully. 
Sep 4 17:11:44.383010 kubelet[3531]: I0904 17:11:44.382252 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xt7z7" podStartSLOduration=39.382167213 podCreationTimestamp="2024-09-04 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:44.377252277 +0000 UTC m=+52.689443926" watchObservedRunningTime="2024-09-04 17:11:44.382167213 +0000 UTC m=+52.694358838" Sep 4 17:11:44.439899 systemd-networkd[1610]: cali4c49eec6212: Link UP Sep 4 17:11:44.440369 systemd-networkd[1610]: cali4c49eec6212: Gained carrier Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.210 [INFO][5214] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0 coredns-5dd5756b68- kube-system d22f7da3-7f61-4333-ae07-e54a750d8f41 804 0 2024-09-04 17:11:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-2 coredns-5dd5756b68-vz5lc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c49eec6212 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Namespace="kube-system" Pod="coredns-5dd5756b68-vz5lc" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.211 [INFO][5214] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Namespace="kube-system" Pod="coredns-5dd5756b68-vz5lc" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.303 [INFO][5226] 
ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" HandleID="k8s-pod-network.1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.332 [INFO][5226] ipam_plugin.go 270: Auto assigning IP ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" HandleID="k8s-pod-network.1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005c89a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-2", "pod":"coredns-5dd5756b68-vz5lc", "timestamp":"2024-09-04 17:11:44.303887457 +0000 UTC"}, Hostname:"ip-172-31-29-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.332 [INFO][5226] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.332 [INFO][5226] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.332 [INFO][5226] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-2' Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.336 [INFO][5226] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" host="ip-172-31-29-2" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.347 [INFO][5226] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-2" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.358 [INFO][5226] ipam.go 489: Trying affinity for 192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.367 [INFO][5226] ipam.go 155: Attempting to load block cidr=192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.378 [INFO][5226] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.378 [INFO][5226] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.0/26 handle="k8s-pod-network.1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" host="ip-172-31-29-2" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.388 [INFO][5226] ipam.go 1685: Creating new handle: k8s-pod-network.1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18 Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.400 [INFO][5226] ipam.go 1203: Writing block in order to claim IPs block=192.168.96.0/26 handle="k8s-pod-network.1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" host="ip-172-31-29-2" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.421 [INFO][5226] ipam.go 1216: Successfully claimed IPs: [192.168.96.4/26] block=192.168.96.0/26 handle="k8s-pod-network.1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" 
host="ip-172-31-29-2" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.421 [INFO][5226] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.4/26] handle="k8s-pod-network.1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" host="ip-172-31-29-2" Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.422 [INFO][5226] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:44.492409 containerd[2067]: 2024-09-04 17:11:44.422 [INFO][5226] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.96.4/26] IPv6=[] ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" HandleID="k8s-pod-network.1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:44.498150 containerd[2067]: 2024-09-04 17:11:44.431 [INFO][5214] k8s.go 386: Populated endpoint ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Namespace="kube-system" Pod="coredns-5dd5756b68-vz5lc" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d22f7da3-7f61-4333-ae07-e54a750d8f41", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"", Pod:"coredns-5dd5756b68-vz5lc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c49eec6212", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:44.498150 containerd[2067]: 2024-09-04 17:11:44.432 [INFO][5214] k8s.go 387: Calico CNI using IPs: [192.168.96.4/32] ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Namespace="kube-system" Pod="coredns-5dd5756b68-vz5lc" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:44.498150 containerd[2067]: 2024-09-04 17:11:44.432 [INFO][5214] dataplane_linux.go 68: Setting the host side veth name to cali4c49eec6212 ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Namespace="kube-system" Pod="coredns-5dd5756b68-vz5lc" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:44.498150 containerd[2067]: 2024-09-04 17:11:44.439 [INFO][5214] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Namespace="kube-system" Pod="coredns-5dd5756b68-vz5lc" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:44.498150 containerd[2067]: 2024-09-04 17:11:44.441 [INFO][5214] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Namespace="kube-system" Pod="coredns-5dd5756b68-vz5lc" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d22f7da3-7f61-4333-ae07-e54a750d8f41", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18", Pod:"coredns-5dd5756b68-vz5lc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c49eec6212", MAC:"22:d7:b5:3c:49:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:44.498150 containerd[2067]: 2024-09-04 17:11:44.477 [INFO][5214] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18" Namespace="kube-system" Pod="coredns-5dd5756b68-vz5lc" WorkloadEndpoint="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:44.579835 containerd[2067]: time="2024-09-04T17:11:44.579630538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:44.582057 containerd[2067]: time="2024-09-04T17:11:44.581276470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:44.582057 containerd[2067]: time="2024-09-04T17:11:44.581331394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:44.582057 containerd[2067]: time="2024-09-04T17:11:44.581357026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:44.740096 containerd[2067]: time="2024-09-04T17:11:44.740025695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vz5lc,Uid:d22f7da3-7f61-4333-ae07-e54a750d8f41,Namespace:kube-system,Attempt:1,} returns sandbox id \"1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18\"" Sep 4 17:11:44.747172 containerd[2067]: time="2024-09-04T17:11:44.746306471Z" level=info msg="CreateContainer within sandbox \"1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:11:44.753448 sshd[5205]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:44.771742 containerd[2067]: time="2024-09-04T17:11:44.767616839Z" level=info msg="CreateContainer within sandbox \"1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8268d188a15987c33a4060c92da93390f1a76af234b0dbf846313b0c52652138\"" Sep 4 17:11:44.778332 containerd[2067]: time="2024-09-04T17:11:44.775216655Z" level=info msg="StartContainer for \"8268d188a15987c33a4060c92da93390f1a76af234b0dbf846313b0c52652138\"" Sep 4 17:11:44.783094 systemd[1]: sshd@8-172.31.29.2:22-139.178.89.65:54088.service: Deactivated successfully. Sep 4 17:11:44.790348 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:11:44.792761 systemd-logind[2036]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:11:44.796816 systemd-logind[2036]: Removed session 9. 
Sep 4 17:11:44.881033 containerd[2067]: time="2024-09-04T17:11:44.880829820Z" level=info msg="StartContainer for \"8268d188a15987c33a4060c92da93390f1a76af234b0dbf846313b0c52652138\" returns successfully" Sep 4 17:11:45.413380 systemd-networkd[1610]: cali0090d494f70: Gained IPv6LL Sep 4 17:11:45.477186 kubelet[3531]: I0904 17:11:45.473386 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vz5lc" podStartSLOduration=40.473051639 podCreationTimestamp="2024-09-04 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:45.471473219 +0000 UTC m=+53.783664868" watchObservedRunningTime="2024-09-04 17:11:45.473051639 +0000 UTC m=+53.785243276" Sep 4 17:11:46.110631 systemd-networkd[1610]: cali4c49eec6212: Gained IPv6LL Sep 4 17:11:46.855843 containerd[2067]: time="2024-09-04T17:11:46.853541246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:46.857418 containerd[2067]: time="2024-09-04T17:11:46.856840598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:11:46.858120 containerd[2067]: time="2024-09-04T17:11:46.858073322Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:46.867557 containerd[2067]: time="2024-09-04T17:11:46.867471998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:46.871851 containerd[2067]: time="2024-09-04T17:11:46.871365482Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 3.480301794s" Sep 4 17:11:46.872402 containerd[2067]: time="2024-09-04T17:11:46.872246618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:11:46.875380 containerd[2067]: time="2024-09-04T17:11:46.875037386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:11:46.962448 containerd[2067]: time="2024-09-04T17:11:46.962357462Z" level=info msg="CreateContainer within sandbox \"7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:11:47.008884 containerd[2067]: time="2024-09-04T17:11:47.008395486Z" level=info msg="CreateContainer within sandbox \"7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fa5267829b93b168c0320dee757fff0152f5c094fa9a7016b64d55f9c8a30ffe\"" Sep 4 17:11:47.022167 containerd[2067]: time="2024-09-04T17:11:47.020694323Z" level=info msg="StartContainer for \"fa5267829b93b168c0320dee757fff0152f5c094fa9a7016b64d55f9c8a30ffe\"" Sep 4 17:11:47.251842 containerd[2067]: time="2024-09-04T17:11:47.251778168Z" level=info msg="StartContainer for \"fa5267829b93b168c0320dee757fff0152f5c094fa9a7016b64d55f9c8a30ffe\" returns successfully" Sep 4 17:11:47.511218 kubelet[3531]: I0904 17:11:47.510731 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-757c5dc566-25jwl" 
podStartSLOduration=29.283828352 podCreationTimestamp="2024-09-04 17:11:14 +0000 UTC" firstStartedPulling="2024-09-04 17:11:42.647929017 +0000 UTC m=+50.960120642" lastFinishedPulling="2024-09-04 17:11:46.873605414 +0000 UTC m=+55.185797051" observedRunningTime="2024-09-04 17:11:47.505395781 +0000 UTC m=+55.817587430" watchObservedRunningTime="2024-09-04 17:11:47.509504761 +0000 UTC m=+55.821696410" Sep 4 17:11:48.785202 containerd[2067]: time="2024-09-04T17:11:48.785130951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:48.789608 containerd[2067]: time="2024-09-04T17:11:48.789537027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:11:48.791247 containerd[2067]: time="2024-09-04T17:11:48.790911543Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:48.801065 containerd[2067]: time="2024-09-04T17:11:48.800745567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:48.804228 containerd[2067]: time="2024-09-04T17:11:48.803973951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.928795625s" Sep 4 17:11:48.804228 containerd[2067]: time="2024-09-04T17:11:48.804086943Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:11:48.811583 containerd[2067]: time="2024-09-04T17:11:48.811495947Z" level=info msg="CreateContainer within sandbox \"262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:11:48.837947 containerd[2067]: time="2024-09-04T17:11:48.837700348Z" level=info msg="CreateContainer within sandbox \"262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"89770bac24d46ee14ace92ae58a4c0254fe13da0cef4bd87dea2e0d899b0cac0\"" Sep 4 17:11:48.842962 containerd[2067]: time="2024-09-04T17:11:48.842887396Z" level=info msg="StartContainer for \"89770bac24d46ee14ace92ae58a4c0254fe13da0cef4bd87dea2e0d899b0cac0\"" Sep 4 17:11:48.930086 systemd-journald[1529]: Under memory pressure, flushing caches. Sep 4 17:11:48.926396 systemd-resolved[1942]: Under memory pressure, flushing caches. Sep 4 17:11:48.926479 systemd-resolved[1942]: Flushed all caches. 
Sep 4 17:11:48.942926 ntpd[2018]: Listen normally on 6 vxlan.calico 192.168.96.0:123 Sep 4 17:11:48.954163 ntpd[2018]: 4 Sep 17:11:48 ntpd[2018]: Listen normally on 6 vxlan.calico 192.168.96.0:123 Sep 4 17:11:48.954163 ntpd[2018]: 4 Sep 17:11:48 ntpd[2018]: Listen normally on 7 vxlan.calico [fe80::64b7:6bff:fe1c:65c%4]:123 Sep 4 17:11:48.954163 ntpd[2018]: 4 Sep 17:11:48 ntpd[2018]: Listen normally on 8 cali39fe665d9ba [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:11:48.954163 ntpd[2018]: 4 Sep 17:11:48 ntpd[2018]: Listen normally on 9 cali382ea1bc722 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:11:48.954163 ntpd[2018]: 4 Sep 17:11:48 ntpd[2018]: Listen normally on 10 cali0090d494f70 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:11:48.954163 ntpd[2018]: 4 Sep 17:11:48 ntpd[2018]: Listen normally on 11 cali4c49eec6212 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:11:48.943058 ntpd[2018]: Listen normally on 7 vxlan.calico [fe80::64b7:6bff:fe1c:65c%4]:123 Sep 4 17:11:48.943143 ntpd[2018]: Listen normally on 8 cali39fe665d9ba [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:11:48.943219 ntpd[2018]: Listen normally on 9 cali382ea1bc722 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:11:48.943320 ntpd[2018]: Listen normally on 10 cali0090d494f70 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:11:48.943394 ntpd[2018]: Listen normally on 11 cali4c49eec6212 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:11:49.063608 containerd[2067]: time="2024-09-04T17:11:49.061746661Z" level=info msg="StartContainer for \"89770bac24d46ee14ace92ae58a4c0254fe13da0cef4bd87dea2e0d899b0cac0\" returns successfully" Sep 4 17:11:49.238715 kubelet[3531]: I0904 17:11:49.238025 3531 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:11:49.238715 kubelet[3531]: I0904 17:11:49.238082 3531 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:11:49.482583 kubelet[3531]: I0904 17:11:49.480239 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-lxpqm" podStartSLOduration=28.251846803 podCreationTimestamp="2024-09-04 17:11:14 +0000 UTC" firstStartedPulling="2024-09-04 17:11:41.576104287 +0000 UTC m=+49.888295900" lastFinishedPulling="2024-09-04 17:11:48.804440943 +0000 UTC m=+57.116632580" observedRunningTime="2024-09-04 17:11:49.479777679 +0000 UTC m=+57.791969400" watchObservedRunningTime="2024-09-04 17:11:49.480183483 +0000 UTC m=+57.792375132" Sep 4 17:11:49.786792 systemd[1]: Started sshd@9-172.31.29.2:22-139.178.89.65:33274.service - OpenSSH per-connection server daemon (139.178.89.65:33274). Sep 4 17:11:49.981395 sshd[5476]: Accepted publickey for core from 139.178.89.65 port 33274 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:49.983735 sshd[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:50.002087 systemd-logind[2036]: New session 10 of user core. Sep 4 17:11:50.008594 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:11:50.308591 sshd[5476]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:50.316947 systemd[1]: sshd@9-172.31.29.2:22-139.178.89.65:33274.service: Deactivated successfully. Sep 4 17:11:50.325940 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:11:50.331010 systemd-logind[2036]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:11:50.346762 systemd[1]: Started sshd@10-172.31.29.2:22-139.178.89.65:33280.service - OpenSSH per-connection server daemon (139.178.89.65:33280). Sep 4 17:11:50.352377 systemd-logind[2036]: Removed session 10. 
Sep 4 17:11:50.533285 sshd[5491]: Accepted publickey for core from 139.178.89.65 port 33280 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:50.536484 sshd[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:50.549918 systemd-logind[2036]: New session 11 of user core. Sep 4 17:11:50.556918 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:11:51.381521 sshd[5491]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:51.402878 systemd-logind[2036]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:11:51.404395 systemd[1]: sshd@10-172.31.29.2:22-139.178.89.65:33280.service: Deactivated successfully. Sep 4 17:11:51.414391 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:11:51.432981 systemd[1]: Started sshd@11-172.31.29.2:22-139.178.89.65:33288.service - OpenSSH per-connection server daemon (139.178.89.65:33288). Sep 4 17:11:51.438683 systemd-logind[2036]: Removed session 11. Sep 4 17:11:51.646388 sshd[5503]: Accepted publickey for core from 139.178.89.65 port 33288 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:51.646180 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:51.668741 systemd-logind[2036]: New session 12 of user core. Sep 4 17:11:51.673415 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:11:51.950104 containerd[2067]: time="2024-09-04T17:11:51.949911739Z" level=info msg="StopPodSandbox for \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\"" Sep 4 17:11:51.968718 sshd[5503]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:51.984794 systemd[1]: sshd@11-172.31.29.2:22-139.178.89.65:33288.service: Deactivated successfully. Sep 4 17:11:51.987501 systemd-logind[2036]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:11:52.000712 systemd[1]: session-12.scope: Deactivated successfully. 
Sep 4 17:11:52.004732 systemd-logind[2036]: Removed session 12. Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.087 [WARNING][5531] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0", GenerateName:"calico-kube-controllers-757c5dc566-", Namespace:"calico-system", SelfLink:"", UID:"66a18fea-3d72-4ed1-bd9b-d7382885dcbb", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"757c5dc566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7", Pod:"calico-kube-controllers-757c5dc566-25jwl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali382ea1bc722", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.087 [INFO][5531] k8s.go 608: Cleaning up netns 
ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.087 [INFO][5531] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" iface="eth0" netns="" Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.087 [INFO][5531] k8s.go 615: Releasing IP address(es) ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.088 [INFO][5531] utils.go 188: Calico CNI releasing IP address ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.136 [INFO][5538] ipam_plugin.go 417: Releasing address using handleID ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" HandleID="k8s-pod-network.3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.137 [INFO][5538] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.137 [INFO][5538] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.151 [WARNING][5538] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" HandleID="k8s-pod-network.3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.151 [INFO][5538] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" HandleID="k8s-pod-network.3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.153 [INFO][5538] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:52.168033 containerd[2067]: 2024-09-04 17:11:52.159 [INFO][5531] k8s.go 621: Teardown processing complete. ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:52.170434 containerd[2067]: time="2024-09-04T17:11:52.168171568Z" level=info msg="TearDown network for sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\" successfully" Sep 4 17:11:52.170434 containerd[2067]: time="2024-09-04T17:11:52.168326680Z" level=info msg="StopPodSandbox for \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\" returns successfully" Sep 4 17:11:52.170434 containerd[2067]: time="2024-09-04T17:11:52.169901848Z" level=info msg="RemovePodSandbox for \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\"" Sep 4 17:11:52.170434 containerd[2067]: time="2024-09-04T17:11:52.170035912Z" level=info msg="Forcibly stopping sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\"" Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.279 [WARNING][5556] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0", GenerateName:"calico-kube-controllers-757c5dc566-", Namespace:"calico-system", SelfLink:"", UID:"66a18fea-3d72-4ed1-bd9b-d7382885dcbb", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"757c5dc566", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"7adda32655d1bc29df322d4906e8ebf57100651481b3b556d49d1ccc2c5c60d7", Pod:"calico-kube-controllers-757c5dc566-25jwl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali382ea1bc722", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.279 [INFO][5556] k8s.go 608: Cleaning up netns ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.280 [INFO][5556] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" iface="eth0" netns="" Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.281 [INFO][5556] k8s.go 615: Releasing IP address(es) ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.281 [INFO][5556] utils.go 188: Calico CNI releasing IP address ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.344 [INFO][5562] ipam_plugin.go 417: Releasing address using handleID ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" HandleID="k8s-pod-network.3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.344 [INFO][5562] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.344 [INFO][5562] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.359 [WARNING][5562] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" HandleID="k8s-pod-network.3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.359 [INFO][5562] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" HandleID="k8s-pod-network.3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Workload="ip--172--31--29--2-k8s-calico--kube--controllers--757c5dc566--25jwl-eth0" Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.361 [INFO][5562] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:52.370432 containerd[2067]: 2024-09-04 17:11:52.365 [INFO][5556] k8s.go 621: Teardown processing complete. ContainerID="3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13" Sep 4 17:11:52.370432 containerd[2067]: time="2024-09-04T17:11:52.369928721Z" level=info msg="TearDown network for sandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\" successfully" Sep 4 17:11:52.377898 containerd[2067]: time="2024-09-04T17:11:52.376372109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:11:52.377898 containerd[2067]: time="2024-09-04T17:11:52.377451905Z" level=info msg="RemovePodSandbox \"3fdb36d1aa1842fdc545a689269e14706a75b409041dc73af5a0a8ca8d7bea13\" returns successfully" Sep 4 17:11:52.380661 containerd[2067]: time="2024-09-04T17:11:52.380586437Z" level=info msg="StopPodSandbox for \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\"" Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.479 [WARNING][5580] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d22f7da3-7f61-4333-ae07-e54a750d8f41", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18", Pod:"coredns-5dd5756b68-vz5lc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c49eec6212", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.479 [INFO][5580] k8s.go 608: Cleaning up netns ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.479 [INFO][5580] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" iface="eth0" netns="" Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.480 [INFO][5580] k8s.go 615: Releasing IP address(es) ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.480 [INFO][5580] utils.go 188: Calico CNI releasing IP address ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.526 [INFO][5586] ipam_plugin.go 417: Releasing address using handleID ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" HandleID="k8s-pod-network.a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.526 [INFO][5586] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.526 [INFO][5586] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.557 [WARNING][5586] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" HandleID="k8s-pod-network.a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.560 [INFO][5586] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" HandleID="k8s-pod-network.a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.567 [INFO][5586] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:52.580589 containerd[2067]: 2024-09-04 17:11:52.572 [INFO][5580] k8s.go 621: Teardown processing complete. 
ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:52.580589 containerd[2067]: time="2024-09-04T17:11:52.580562238Z" level=info msg="TearDown network for sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\" successfully" Sep 4 17:11:52.580589 containerd[2067]: time="2024-09-04T17:11:52.580600758Z" level=info msg="StopPodSandbox for \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\" returns successfully" Sep 4 17:11:52.584503 containerd[2067]: time="2024-09-04T17:11:52.582460554Z" level=info msg="RemovePodSandbox for \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\"" Sep 4 17:11:52.584503 containerd[2067]: time="2024-09-04T17:11:52.582798630Z" level=info msg="Forcibly stopping sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\"" Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.699 [WARNING][5605] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d22f7da3-7f61-4333-ae07-e54a750d8f41", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"1a2b382b68bb7dcc6d188aa19919cc37bc42f3f0f9aedbe2885a0768c89b1a18", Pod:"coredns-5dd5756b68-vz5lc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c49eec6212", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.700 [INFO][5605] k8s.go 608: Cleaning up netns 
ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.700 [INFO][5605] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" iface="eth0" netns="" Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.700 [INFO][5605] k8s.go 615: Releasing IP address(es) ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.700 [INFO][5605] utils.go 188: Calico CNI releasing IP address ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.750 [INFO][5611] ipam_plugin.go 417: Releasing address using handleID ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" HandleID="k8s-pod-network.a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.750 [INFO][5611] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.750 [INFO][5611] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.765 [WARNING][5611] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" HandleID="k8s-pod-network.a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.766 [INFO][5611] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" HandleID="k8s-pod-network.a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--vz5lc-eth0" Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.768 [INFO][5611] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:52.774195 containerd[2067]: 2024-09-04 17:11:52.771 [INFO][5605] k8s.go 621: Teardown processing complete. ContainerID="a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c" Sep 4 17:11:52.774195 containerd[2067]: time="2024-09-04T17:11:52.774156343Z" level=info msg="TearDown network for sandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\" successfully" Sep 4 17:11:52.778153 containerd[2067]: time="2024-09-04T17:11:52.778082683Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:11:52.778770 containerd[2067]: time="2024-09-04T17:11:52.778191283Z" level=info msg="RemovePodSandbox \"a8ce472e1f8a7f0edb2e1fb29f2e6c9ad93bc1917641e64639cb64d29cc1fb9c\" returns successfully" Sep 4 17:11:52.779555 containerd[2067]: time="2024-09-04T17:11:52.779000167Z" level=info msg="StopPodSandbox for \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\"" Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.844 [WARNING][5629] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f07346e-efe9-4d91-a84b-47e3d545d647", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd", Pod:"csi-node-driver-lxpqm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali39fe665d9ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.845 [INFO][5629] k8s.go 608: Cleaning up netns ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.845 [INFO][5629] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" iface="eth0" netns="" Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.845 [INFO][5629] k8s.go 615: Releasing IP address(es) ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.845 [INFO][5629] utils.go 188: Calico CNI releasing IP address ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.885 [INFO][5635] ipam_plugin.go 417: Releasing address using handleID ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" HandleID="k8s-pod-network.b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.885 [INFO][5635] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.885 [INFO][5635] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.899 [WARNING][5635] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" HandleID="k8s-pod-network.b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.899 [INFO][5635] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" HandleID="k8s-pod-network.b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.903 [INFO][5635] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:52.908831 containerd[2067]: 2024-09-04 17:11:52.906 [INFO][5629] k8s.go 621: Teardown processing complete. ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Sep 4 17:11:52.910219 containerd[2067]: time="2024-09-04T17:11:52.908881352Z" level=info msg="TearDown network for sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\" successfully" Sep 4 17:11:52.910219 containerd[2067]: time="2024-09-04T17:11:52.908962748Z" level=info msg="StopPodSandbox for \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\" returns successfully" Sep 4 17:11:52.910219 containerd[2067]: time="2024-09-04T17:11:52.909529700Z" level=info msg="RemovePodSandbox for \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\"" Sep 4 17:11:52.910219 containerd[2067]: time="2024-09-04T17:11:52.909589196Z" level=info msg="Forcibly stopping sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\"" Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:52.974 [WARNING][5653] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f07346e-efe9-4d91-a84b-47e3d545d647", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"262df2695e2eab01b932233a5dbd317665cb9f425c291082c29a6989cd6257fd", Pod:"csi-node-driver-lxpqm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali39fe665d9ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:52.975 [INFO][5653] k8s.go 608: Cleaning up netns ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:52.975 [INFO][5653] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" iface="eth0" netns="" Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:52.975 [INFO][5653] k8s.go 615: Releasing IP address(es) ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:52.975 [INFO][5653] utils.go 188: Calico CNI releasing IP address ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:53.020 [INFO][5659] ipam_plugin.go 417: Releasing address using handleID ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" HandleID="k8s-pod-network.b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:53.020 [INFO][5659] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:53.020 [INFO][5659] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:53.032 [WARNING][5659] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" HandleID="k8s-pod-network.b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:53.032 [INFO][5659] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" HandleID="k8s-pod-network.b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Workload="ip--172--31--29--2-k8s-csi--node--driver--lxpqm-eth0" Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:53.034 [INFO][5659] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:53.039943 containerd[2067]: 2024-09-04 17:11:53.036 [INFO][5653] k8s.go 621: Teardown processing complete. ContainerID="b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12" Sep 4 17:11:53.041964 containerd[2067]: time="2024-09-04T17:11:53.039907840Z" level=info msg="TearDown network for sandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\" successfully" Sep 4 17:11:53.045297 containerd[2067]: time="2024-09-04T17:11:53.045144196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:11:53.045591 containerd[2067]: time="2024-09-04T17:11:53.045471700Z" level=info msg="RemovePodSandbox \"b752ae60ef80d7aaad64483c58ecf1a57c36664eb990d08d427286484932ce12\" returns successfully" Sep 4 17:11:53.046682 containerd[2067]: time="2024-09-04T17:11:53.046588612Z" level=info msg="StopPodSandbox for \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\"" Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.105 [WARNING][5677] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d6f19b88-2c58-4fc4-8440-e3af1c017a65", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41", Pod:"coredns-5dd5756b68-xt7z7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0090d494f70", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.105 [INFO][5677] k8s.go 608: Cleaning up netns ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.105 [INFO][5677] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" iface="eth0" netns="" Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.105 [INFO][5677] k8s.go 615: Releasing IP address(es) ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.105 [INFO][5677] utils.go 188: Calico CNI releasing IP address ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.142 [INFO][5683] ipam_plugin.go 417: Releasing address using handleID ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" HandleID="k8s-pod-network.338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.142 [INFO][5683] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.142 [INFO][5683] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.154 [WARNING][5683] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" HandleID="k8s-pod-network.338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.154 [INFO][5683] ipam_plugin.go 445: Releasing address using workloadID ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" HandleID="k8s-pod-network.338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.158 [INFO][5683] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:53.163539 containerd[2067]: 2024-09-04 17:11:53.161 [INFO][5677] k8s.go 621: Teardown processing complete. 
ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:53.165033 containerd[2067]: time="2024-09-04T17:11:53.163578701Z" level=info msg="TearDown network for sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\" successfully" Sep 4 17:11:53.165033 containerd[2067]: time="2024-09-04T17:11:53.163617113Z" level=info msg="StopPodSandbox for \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\" returns successfully" Sep 4 17:11:53.165033 containerd[2067]: time="2024-09-04T17:11:53.164604413Z" level=info msg="RemovePodSandbox for \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\"" Sep 4 17:11:53.165033 containerd[2067]: time="2024-09-04T17:11:53.164754557Z" level=info msg="Forcibly stopping sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\"" Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.226 [WARNING][5702] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d6f19b88-2c58-4fc4-8440-e3af1c017a65", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"2874f0b1af2ca6f48f4e6d96de1ae6ef8e9db57de6f2b06bc2eec4b40f1a5c41", Pod:"coredns-5dd5756b68-xt7z7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0090d494f70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.226 [INFO][5702] k8s.go 608: Cleaning up netns 
ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.227 [INFO][5702] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" iface="eth0" netns="" Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.227 [INFO][5702] k8s.go 615: Releasing IP address(es) ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.227 [INFO][5702] utils.go 188: Calico CNI releasing IP address ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.266 [INFO][5708] ipam_plugin.go 417: Releasing address using handleID ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" HandleID="k8s-pod-network.338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.267 [INFO][5708] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.267 [INFO][5708] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.281 [WARNING][5708] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" HandleID="k8s-pod-network.338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.281 [INFO][5708] ipam_plugin.go 445: Releasing address using workloadID ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" HandleID="k8s-pod-network.338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Workload="ip--172--31--29--2-k8s-coredns--5dd5756b68--xt7z7-eth0" Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.283 [INFO][5708] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:53.288439 containerd[2067]: 2024-09-04 17:11:53.286 [INFO][5702] k8s.go 621: Teardown processing complete. ContainerID="338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e" Sep 4 17:11:53.289680 containerd[2067]: time="2024-09-04T17:11:53.288470430Z" level=info msg="TearDown network for sandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\" successfully" Sep 4 17:11:53.292978 containerd[2067]: time="2024-09-04T17:11:53.292768110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:11:53.292978 containerd[2067]: time="2024-09-04T17:11:53.292877850Z" level=info msg="RemovePodSandbox \"338b1fe1e502bbc6cb35199a7e04056e082028470da9031d61d968f08e65ae0e\" returns successfully" Sep 4 17:11:56.998803 systemd[1]: Started sshd@12-172.31.29.2:22-139.178.89.65:33296.service - OpenSSH per-connection server daemon (139.178.89.65:33296). 
Sep 4 17:11:57.182888 sshd[5740]: Accepted publickey for core from 139.178.89.65 port 33296 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:57.185535 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:57.194111 systemd-logind[2036]: New session 13 of user core. Sep 4 17:11:57.200702 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:11:57.456549 sshd[5740]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:57.464355 systemd-logind[2036]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:11:57.465667 systemd[1]: sshd@12-172.31.29.2:22-139.178.89.65:33296.service: Deactivated successfully. Sep 4 17:11:57.472957 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:11:57.475198 systemd-logind[2036]: Removed session 13. Sep 4 17:12:02.487170 systemd[1]: Started sshd@13-172.31.29.2:22-139.178.89.65:48166.service - OpenSSH per-connection server daemon (139.178.89.65:48166). Sep 4 17:12:02.668208 sshd[5765]: Accepted publickey for core from 139.178.89.65 port 48166 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:02.671194 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:02.685721 systemd-logind[2036]: New session 14 of user core. Sep 4 17:12:02.694355 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:12:02.956414 sshd[5765]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:02.964689 systemd[1]: sshd@13-172.31.29.2:22-139.178.89.65:48166.service: Deactivated successfully. Sep 4 17:12:02.970727 systemd-logind[2036]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:12:02.971310 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:12:02.976447 systemd-logind[2036]: Removed session 14. 
Sep 4 17:12:07.987752 systemd[1]: Started sshd@14-172.31.29.2:22-139.178.89.65:53464.service - OpenSSH per-connection server daemon (139.178.89.65:53464). Sep 4 17:12:08.167117 sshd[5784]: Accepted publickey for core from 139.178.89.65 port 53464 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:08.170313 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:08.178774 systemd-logind[2036]: New session 15 of user core. Sep 4 17:12:08.188923 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:12:08.428977 sshd[5784]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:08.435239 systemd[1]: sshd@14-172.31.29.2:22-139.178.89.65:53464.service: Deactivated successfully. Sep 4 17:12:08.443240 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:12:08.445911 systemd-logind[2036]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:12:08.447794 systemd-logind[2036]: Removed session 15. Sep 4 17:12:13.458795 systemd[1]: Started sshd@15-172.31.29.2:22-139.178.89.65:53468.service - OpenSSH per-connection server daemon (139.178.89.65:53468). Sep 4 17:12:13.637890 sshd[5822]: Accepted publickey for core from 139.178.89.65 port 53468 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:13.640537 sshd[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:13.650043 systemd-logind[2036]: New session 16 of user core. Sep 4 17:12:13.656384 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:12:13.892711 sshd[5822]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:13.898656 systemd[1]: sshd@15-172.31.29.2:22-139.178.89.65:53468.service: Deactivated successfully. Sep 4 17:12:13.907223 systemd-logind[2036]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:12:13.909125 systemd[1]: session-16.scope: Deactivated successfully. 
Sep 4 17:12:13.912749 systemd-logind[2036]: Removed session 16. Sep 4 17:12:13.929790 systemd[1]: Started sshd@16-172.31.29.2:22-139.178.89.65:53472.service - OpenSSH per-connection server daemon (139.178.89.65:53472). Sep 4 17:12:14.112210 sshd[5835]: Accepted publickey for core from 139.178.89.65 port 53472 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:14.114911 sshd[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:14.124179 systemd-logind[2036]: New session 17 of user core. Sep 4 17:12:14.131740 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:12:14.554678 sshd[5835]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:14.563110 systemd[1]: sshd@16-172.31.29.2:22-139.178.89.65:53472.service: Deactivated successfully. Sep 4 17:12:14.570288 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:12:14.570406 systemd-logind[2036]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:12:14.573647 systemd-logind[2036]: Removed session 17. Sep 4 17:12:14.587873 systemd[1]: Started sshd@17-172.31.29.2:22-139.178.89.65:53476.service - OpenSSH per-connection server daemon (139.178.89.65:53476). Sep 4 17:12:14.760871 sshd[5847]: Accepted publickey for core from 139.178.89.65 port 53476 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:14.764700 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:14.772439 systemd-logind[2036]: New session 18 of user core. Sep 4 17:12:14.780865 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:12:16.503474 sshd[5847]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:16.516128 systemd[1]: sshd@17-172.31.29.2:22-139.178.89.65:53476.service: Deactivated successfully. Sep 4 17:12:16.538346 systemd-logind[2036]: Session 18 logged out. Waiting for processes to exit. 
Sep 4 17:12:16.540768 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:12:16.565089 systemd[1]: Started sshd@18-172.31.29.2:22-139.178.89.65:53480.service - OpenSSH per-connection server daemon (139.178.89.65:53480). Sep 4 17:12:16.571053 systemd-logind[2036]: Removed session 18. Sep 4 17:12:16.774443 sshd[5889]: Accepted publickey for core from 139.178.89.65 port 53480 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:16.778935 sshd[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:16.798778 systemd-logind[2036]: New session 19 of user core. Sep 4 17:12:16.805850 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:12:17.883394 sshd[5889]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:17.891149 systemd-logind[2036]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:12:17.896522 systemd[1]: sshd@18-172.31.29.2:22-139.178.89.65:53480.service: Deactivated successfully. Sep 4 17:12:17.913949 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:12:17.939982 systemd[1]: Started sshd@19-172.31.29.2:22-139.178.89.65:60626.service - OpenSSH per-connection server daemon (139.178.89.65:60626). Sep 4 17:12:17.945811 systemd-logind[2036]: Removed session 19. Sep 4 17:12:18.123697 sshd[5904]: Accepted publickey for core from 139.178.89.65 port 60626 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:18.128374 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:18.145622 systemd-logind[2036]: New session 20 of user core. Sep 4 17:12:18.153780 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:12:18.564952 sshd[5904]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:18.586726 systemd[1]: sshd@19-172.31.29.2:22-139.178.89.65:60626.service: Deactivated successfully. 
Sep 4 17:12:18.602659 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:12:18.606565 systemd-logind[2036]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:12:18.611900 systemd-logind[2036]: Removed session 20. Sep 4 17:12:19.069297 kubelet[3531]: I0904 17:12:19.066131 3531 topology_manager.go:215] "Topology Admit Handler" podUID="46fb005e-8096-46b6-b607-ab6b5d69bf9f" podNamespace="calico-apiserver" podName="calico-apiserver-777df6d875-6k7f2" Sep 4 17:12:19.257464 kubelet[3531]: I0904 17:12:19.256524 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/46fb005e-8096-46b6-b607-ab6b5d69bf9f-calico-apiserver-certs\") pod \"calico-apiserver-777df6d875-6k7f2\" (UID: \"46fb005e-8096-46b6-b607-ab6b5d69bf9f\") " pod="calico-apiserver/calico-apiserver-777df6d875-6k7f2" Sep 4 17:12:19.258372 kubelet[3531]: I0904 17:12:19.257793 3531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rff\" (UniqueName: \"kubernetes.io/projected/46fb005e-8096-46b6-b607-ab6b5d69bf9f-kube-api-access-x7rff\") pod \"calico-apiserver-777df6d875-6k7f2\" (UID: \"46fb005e-8096-46b6-b607-ab6b5d69bf9f\") " pod="calico-apiserver/calico-apiserver-777df6d875-6k7f2" Sep 4 17:12:19.395401 containerd[2067]: time="2024-09-04T17:12:19.393424399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777df6d875-6k7f2,Uid:46fb005e-8096-46b6-b607-ab6b5d69bf9f,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:12:19.657198 systemd-networkd[1610]: cali7a25fbc4d09: Link UP Sep 4 17:12:19.659671 systemd-networkd[1610]: cali7a25fbc4d09: Gained carrier Sep 4 17:12:19.667888 (udev-worker)[5948]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.529 [INFO][5930] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0 calico-apiserver-777df6d875- calico-apiserver 46fb005e-8096-46b6-b607-ab6b5d69bf9f 1068 0 2024-09-04 17:12:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:777df6d875 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-2 calico-apiserver-777df6d875-6k7f2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a25fbc4d09 [] []}} ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Namespace="calico-apiserver" Pod="calico-apiserver-777df6d875-6k7f2" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.530 [INFO][5930] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Namespace="calico-apiserver" Pod="calico-apiserver-777df6d875-6k7f2" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.584 [INFO][5941] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" HandleID="k8s-pod-network.779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Workload="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.602 [INFO][5941] ipam_plugin.go 270: Auto assigning IP ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" 
HandleID="k8s-pod-network.779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Workload="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d5a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-2", "pod":"calico-apiserver-777df6d875-6k7f2", "timestamp":"2024-09-04 17:12:19.584407436 +0000 UTC"}, Hostname:"ip-172-31-29-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.602 [INFO][5941] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.603 [INFO][5941] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.603 [INFO][5941] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-2' Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.607 [INFO][5941] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" host="ip-172-31-29-2" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.614 [INFO][5941] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-2" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.621 [INFO][5941] ipam.go 489: Trying affinity for 192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.624 [INFO][5941] ipam.go 155: Attempting to load block cidr=192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.628 [INFO][5941] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.0/26 host="ip-172-31-29-2" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 
17:12:19.628 [INFO][5941] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.0/26 handle="k8s-pod-network.779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" host="ip-172-31-29-2" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.630 [INFO][5941] ipam.go 1685: Creating new handle: k8s-pod-network.779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54 Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.636 [INFO][5941] ipam.go 1203: Writing block in order to claim IPs block=192.168.96.0/26 handle="k8s-pod-network.779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" host="ip-172-31-29-2" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.644 [INFO][5941] ipam.go 1216: Successfully claimed IPs: [192.168.96.5/26] block=192.168.96.0/26 handle="k8s-pod-network.779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" host="ip-172-31-29-2" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.644 [INFO][5941] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.5/26] handle="k8s-pod-network.779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" host="ip-172-31-29-2" Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.644 [INFO][5941] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:12:19.703213 containerd[2067]: 2024-09-04 17:12:19.644 [INFO][5941] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.96.5/26] IPv6=[] ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" HandleID="k8s-pod-network.779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Workload="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0" Sep 4 17:12:19.704488 containerd[2067]: 2024-09-04 17:12:19.648 [INFO][5930] k8s.go 386: Populated endpoint ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Namespace="calico-apiserver" Pod="calico-apiserver-777df6d875-6k7f2" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0", GenerateName:"calico-apiserver-777df6d875-", Namespace:"calico-apiserver", SelfLink:"", UID:"46fb005e-8096-46b6-b607-ab6b5d69bf9f", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777df6d875", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"", Pod:"calico-apiserver-777df6d875-6k7f2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a25fbc4d09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:19.704488 containerd[2067]: 2024-09-04 17:12:19.648 [INFO][5930] k8s.go 387: Calico CNI using IPs: [192.168.96.5/32] ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Namespace="calico-apiserver" Pod="calico-apiserver-777df6d875-6k7f2" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0" Sep 4 17:12:19.704488 containerd[2067]: 2024-09-04 17:12:19.648 [INFO][5930] dataplane_linux.go 68: Setting the host side veth name to cali7a25fbc4d09 ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Namespace="calico-apiserver" Pod="calico-apiserver-777df6d875-6k7f2" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0" Sep 4 17:12:19.704488 containerd[2067]: 2024-09-04 17:12:19.663 [INFO][5930] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Namespace="calico-apiserver" Pod="calico-apiserver-777df6d875-6k7f2" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0" Sep 4 17:12:19.704488 containerd[2067]: 2024-09-04 17:12:19.665 [INFO][5930] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Namespace="calico-apiserver" Pod="calico-apiserver-777df6d875-6k7f2" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0", GenerateName:"calico-apiserver-777df6d875-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"46fb005e-8096-46b6-b607-ab6b5d69bf9f", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777df6d875", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-2", ContainerID:"779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54", Pod:"calico-apiserver-777df6d875-6k7f2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a25fbc4d09", MAC:"de:a6:24:65:50:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:19.704488 containerd[2067]: 2024-09-04 17:12:19.690 [INFO][5930] k8s.go 500: Wrote updated endpoint to datastore ContainerID="779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54" Namespace="calico-apiserver" Pod="calico-apiserver-777df6d875-6k7f2" WorkloadEndpoint="ip--172--31--29--2-k8s-calico--apiserver--777df6d875--6k7f2-eth0" Sep 4 17:12:19.787309 containerd[2067]: time="2024-09-04T17:12:19.785426841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:19.787309 containerd[2067]: time="2024-09-04T17:12:19.785548245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:19.787309 containerd[2067]: time="2024-09-04T17:12:19.785603313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:19.787309 containerd[2067]: time="2024-09-04T17:12:19.785632821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:19.899378 containerd[2067]: time="2024-09-04T17:12:19.898105594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777df6d875-6k7f2,Uid:46fb005e-8096-46b6-b607-ab6b5d69bf9f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54\"" Sep 4 17:12:19.901564 containerd[2067]: time="2024-09-04T17:12:19.901445026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:12:20.862887 systemd-networkd[1610]: cali7a25fbc4d09: Gained IPv6LL Sep 4 17:12:22.447149 containerd[2067]: time="2024-09-04T17:12:22.446991646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:22.450075 containerd[2067]: time="2024-09-04T17:12:22.450010031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Sep 4 17:12:22.451805 containerd[2067]: time="2024-09-04T17:12:22.451469087Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:22.458201 containerd[2067]: time="2024-09-04T17:12:22.458093591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 
17:12:22.460289 containerd[2067]: time="2024-09-04T17:12:22.459738671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 2.558225353s" Sep 4 17:12:22.460289 containerd[2067]: time="2024-09-04T17:12:22.459798635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Sep 4 17:12:22.466448 containerd[2067]: time="2024-09-04T17:12:22.466326455Z" level=info msg="CreateContainer within sandbox \"779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:12:22.484657 containerd[2067]: time="2024-09-04T17:12:22.484600751Z" level=info msg="CreateContainer within sandbox \"779c0b5c2f7882792f5b60ab054722c5ca370ff2c9e3086331a2627c62773b54\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6eb5e58e9744d02f034e4b6a1cec30ee6681fde987b749c5418bb739a2f0e4dc\"" Sep 4 17:12:22.487290 containerd[2067]: time="2024-09-04T17:12:22.485694911Z" level=info msg="StartContainer for \"6eb5e58e9744d02f034e4b6a1cec30ee6681fde987b749c5418bb739a2f0e4dc\"" Sep 4 17:12:22.674774 containerd[2067]: time="2024-09-04T17:12:22.674212800Z" level=info msg="StartContainer for \"6eb5e58e9744d02f034e4b6a1cec30ee6681fde987b749c5418bb739a2f0e4dc\" returns successfully" Sep 4 17:12:22.942768 ntpd[2018]: Listen normally on 12 cali7a25fbc4d09 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:12:22.943769 ntpd[2018]: 4 Sep 17:12:22 ntpd[2018]: Listen normally on 12 cali7a25fbc4d09 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:12:23.606809 systemd[1]: Started 
sshd@20-172.31.29.2:22-139.178.89.65:60640.service - OpenSSH per-connection server daemon (139.178.89.65:60640). Sep 4 17:12:23.655286 kubelet[3531]: I0904 17:12:23.655215 3531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-777df6d875-6k7f2" podStartSLOduration=2.095208127 podCreationTimestamp="2024-09-04 17:12:19 +0000 UTC" firstStartedPulling="2024-09-04 17:12:19.900241738 +0000 UTC m=+88.212433363" lastFinishedPulling="2024-09-04 17:12:22.460190219 +0000 UTC m=+90.772381832" observedRunningTime="2024-09-04 17:12:23.651721224 +0000 UTC m=+91.963912849" watchObservedRunningTime="2024-09-04 17:12:23.655156596 +0000 UTC m=+91.967348245" Sep 4 17:12:23.803838 sshd[6054]: Accepted publickey for core from 139.178.89.65 port 60640 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:23.807097 sshd[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:23.815652 systemd-logind[2036]: New session 21 of user core. Sep 4 17:12:23.828777 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:12:24.128559 sshd[6054]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:24.142894 systemd[1]: sshd@20-172.31.29.2:22-139.178.89.65:60640.service: Deactivated successfully. Sep 4 17:12:24.159234 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:12:24.164818 systemd-logind[2036]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:12:24.168997 systemd-logind[2036]: Removed session 21. Sep 4 17:12:26.962480 systemd[1]: run-containerd-runc-k8s.io-fa5267829b93b168c0320dee757fff0152f5c094fa9a7016b64d55f9c8a30ffe-runc.kMY7ne.mount: Deactivated successfully. Sep 4 17:12:29.165065 systemd[1]: Started sshd@21-172.31.29.2:22-139.178.89.65:50070.service - OpenSSH per-connection server daemon (139.178.89.65:50070). 
Sep 4 17:12:29.344385 sshd[6097]: Accepted publickey for core from 139.178.89.65 port 50070 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:29.347394 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:29.355695 systemd-logind[2036]: New session 22 of user core. Sep 4 17:12:29.360755 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:12:29.614605 sshd[6097]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:29.620608 systemd[1]: sshd@21-172.31.29.2:22-139.178.89.65:50070.service: Deactivated successfully. Sep 4 17:12:29.620875 systemd-logind[2036]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:12:29.628047 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:12:29.631358 systemd-logind[2036]: Removed session 22. Sep 4 17:12:34.647750 systemd[1]: Started sshd@22-172.31.29.2:22-139.178.89.65:50078.service - OpenSSH per-connection server daemon (139.178.89.65:50078). Sep 4 17:12:34.836878 sshd[6119]: Accepted publickey for core from 139.178.89.65 port 50078 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:34.839506 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:34.847362 systemd-logind[2036]: New session 23 of user core. Sep 4 17:12:34.853848 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:12:35.135521 sshd[6119]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:35.141541 systemd-logind[2036]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:12:35.142688 systemd[1]: sshd@22-172.31.29.2:22-139.178.89.65:50078.service: Deactivated successfully. Sep 4 17:12:35.149896 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:12:35.154697 systemd-logind[2036]: Removed session 23. 
Sep 4 17:12:40.165832 systemd[1]: Started sshd@23-172.31.29.2:22-139.178.89.65:53192.service - OpenSSH per-connection server daemon (139.178.89.65:53192). Sep 4 17:12:40.347789 sshd[6137]: Accepted publickey for core from 139.178.89.65 port 53192 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:40.350584 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:40.360614 systemd-logind[2036]: New session 24 of user core. Sep 4 17:12:40.369383 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:12:40.603624 sshd[6137]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:40.610712 systemd[1]: sshd@23-172.31.29.2:22-139.178.89.65:53192.service: Deactivated successfully. Sep 4 17:12:40.619410 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:12:40.620078 systemd-logind[2036]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:12:40.623185 systemd-logind[2036]: Removed session 24. Sep 4 17:12:45.637746 systemd[1]: Started sshd@24-172.31.29.2:22-139.178.89.65:53194.service - OpenSSH per-connection server daemon (139.178.89.65:53194). Sep 4 17:12:45.830720 sshd[6179]: Accepted publickey for core from 139.178.89.65 port 53194 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:45.834136 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:45.841808 systemd-logind[2036]: New session 25 of user core. Sep 4 17:12:45.853868 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:12:46.119770 sshd[6179]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:46.129907 systemd[1]: sshd@24-172.31.29.2:22-139.178.89.65:53194.service: Deactivated successfully. Sep 4 17:12:46.140230 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:12:46.142881 systemd-logind[2036]: Session 25 logged out. Waiting for processes to exit. 
Sep 4 17:12:46.147032 systemd-logind[2036]: Removed session 25. Sep 4 17:12:51.150741 systemd[1]: Started sshd@25-172.31.29.2:22-139.178.89.65:40508.service - OpenSSH per-connection server daemon (139.178.89.65:40508). Sep 4 17:12:51.335501 sshd[6193]: Accepted publickey for core from 139.178.89.65 port 40508 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:51.338210 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:51.346191 systemd-logind[2036]: New session 26 of user core. Sep 4 17:12:51.352836 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:12:51.594511 sshd[6193]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:51.600203 systemd[1]: sshd@25-172.31.29.2:22-139.178.89.65:40508.service: Deactivated successfully. Sep 4 17:12:51.601928 systemd-logind[2036]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:12:51.611526 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:12:51.613192 systemd-logind[2036]: Removed session 26. 
Sep 4 17:13:04.845941 kubelet[3531]: E0904 17:13:04.845433 3531 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 4 17:13:05.202499 containerd[2067]: time="2024-09-04T17:13:05.202299927Z" level=info msg="shim disconnected" id=2a23235cc72fe9b3db8fa9d790c04cef6c46c53ddea6807999e291f38cff2588 namespace=k8s.io
Sep 4 17:13:05.202499 containerd[2067]: time="2024-09-04T17:13:05.202388163Z" level=warning msg="cleaning up after shim disconnected" id=2a23235cc72fe9b3db8fa9d790c04cef6c46c53ddea6807999e291f38cff2588 namespace=k8s.io
Sep 4 17:13:05.202499 containerd[2067]: time="2024-09-04T17:13:05.202412391Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:13:05.209533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a23235cc72fe9b3db8fa9d790c04cef6c46c53ddea6807999e291f38cff2588-rootfs.mount: Deactivated successfully.
Sep 4 17:13:05.748761 kubelet[3531]: I0904 17:13:05.748704 3531 scope.go:117] "RemoveContainer" containerID="2a23235cc72fe9b3db8fa9d790c04cef6c46c53ddea6807999e291f38cff2588"
Sep 4 17:13:05.753306 containerd[2067]: time="2024-09-04T17:13:05.753018162Z" level=info msg="CreateContainer within sandbox \"576a3606c152e43508f2c3bbeaef85fc9048b04192b523a72b838bd8e0ab0ae9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 17:13:05.774595 containerd[2067]: time="2024-09-04T17:13:05.774526842Z" level=info msg="CreateContainer within sandbox \"576a3606c152e43508f2c3bbeaef85fc9048b04192b523a72b838bd8e0ab0ae9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e48153bea02765fc5000d473f577e9947abcaaf33ca286802b657e237fe2f1f8\""
Sep 4 17:13:05.775407 containerd[2067]: time="2024-09-04T17:13:05.775348866Z" level=info msg="StartContainer for \"e48153bea02765fc5000d473f577e9947abcaaf33ca286802b657e237fe2f1f8\""
Sep 4 17:13:05.896993 containerd[2067]: time="2024-09-04T17:13:05.896923290Z" level=info msg="StartContainer for \"e48153bea02765fc5000d473f577e9947abcaaf33ca286802b657e237fe2f1f8\" returns successfully"
Sep 4 17:13:06.189085 containerd[2067]: time="2024-09-04T17:13:06.188450356Z" level=info msg="shim disconnected" id=6836f130ca7819b716ad5fccd3b6f3c749fe9784b63aaf3282723a91a1e8a082 namespace=k8s.io
Sep 4 17:13:06.189378 containerd[2067]: time="2024-09-04T17:13:06.189183448Z" level=warning msg="cleaning up after shim disconnected" id=6836f130ca7819b716ad5fccd3b6f3c749fe9784b63aaf3282723a91a1e8a082 namespace=k8s.io
Sep 4 17:13:06.189378 containerd[2067]: time="2024-09-04T17:13:06.189207304Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:13:06.214991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6836f130ca7819b716ad5fccd3b6f3c749fe9784b63aaf3282723a91a1e8a082-rootfs.mount: Deactivated successfully.
Sep 4 17:13:06.759774 kubelet[3531]: I0904 17:13:06.759727 3531 scope.go:117] "RemoveContainer" containerID="6836f130ca7819b716ad5fccd3b6f3c749fe9784b63aaf3282723a91a1e8a082"
Sep 4 17:13:06.763080 containerd[2067]: time="2024-09-04T17:13:06.763016911Z" level=info msg="CreateContainer within sandbox \"9b9089b7b681fb79abbf90f4dbc49fd00ae6f6e6b2f0daab55fd14e6ecf7d53b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 4 17:13:06.801386 containerd[2067]: time="2024-09-04T17:13:06.797823091Z" level=info msg="CreateContainer within sandbox \"9b9089b7b681fb79abbf90f4dbc49fd00ae6f6e6b2f0daab55fd14e6ecf7d53b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"347e03da5c09762f100b07fc80bb6d42573b4d5f7487dec945a1f1fe0e660b20\""
Sep 4 17:13:06.801386 containerd[2067]: time="2024-09-04T17:13:06.798525727Z" level=info msg="StartContainer for \"347e03da5c09762f100b07fc80bb6d42573b4d5f7487dec945a1f1fe0e660b20\""
Sep 4 17:13:06.901357 systemd[1]: run-containerd-runc-k8s.io-347e03da5c09762f100b07fc80bb6d42573b4d5f7487dec945a1f1fe0e660b20-runc.L1QCUh.mount: Deactivated successfully.
Sep 4 17:13:06.981282 containerd[2067]: time="2024-09-04T17:13:06.979921928Z" level=info msg="StartContainer for \"347e03da5c09762f100b07fc80bb6d42573b4d5f7487dec945a1f1fe0e660b20\" returns successfully"
Sep 4 17:13:11.773578 containerd[2067]: time="2024-09-04T17:13:11.773027147Z" level=info msg="shim disconnected" id=618bbf52731944b5f5aeca81b9b90c89dfe157916171cd9064e884ff2374bb3b namespace=k8s.io
Sep 4 17:13:11.773578 containerd[2067]: time="2024-09-04T17:13:11.773250731Z" level=warning msg="cleaning up after shim disconnected" id=618bbf52731944b5f5aeca81b9b90c89dfe157916171cd9064e884ff2374bb3b namespace=k8s.io
Sep 4 17:13:11.773578 containerd[2067]: time="2024-09-04T17:13:11.773313395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:13:11.780544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-618bbf52731944b5f5aeca81b9b90c89dfe157916171cd9064e884ff2374bb3b-rootfs.mount: Deactivated successfully.
Sep 4 17:13:12.784314 kubelet[3531]: I0904 17:13:12.784201 3531 scope.go:117] "RemoveContainer" containerID="618bbf52731944b5f5aeca81b9b90c89dfe157916171cd9064e884ff2374bb3b"
Sep 4 17:13:12.788585 containerd[2067]: time="2024-09-04T17:13:12.788527945Z" level=info msg="CreateContainer within sandbox \"30974b339cd23c86f70b299879676055b90ca6f0a8f96abe74a9776cb1f0c969\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 17:13:12.808360 containerd[2067]: time="2024-09-04T17:13:12.808245013Z" level=info msg="CreateContainer within sandbox \"30974b339cd23c86f70b299879676055b90ca6f0a8f96abe74a9776cb1f0c969\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2a9347fb54b65a6b370a743e7fa472602becb1bb490fd20f8a752754e00502e8\""
Sep 4 17:13:12.810300 containerd[2067]: time="2024-09-04T17:13:12.810048673Z" level=info msg="StartContainer for \"2a9347fb54b65a6b370a743e7fa472602becb1bb490fd20f8a752754e00502e8\""
Sep 4 17:13:12.933153 containerd[2067]: time="2024-09-04T17:13:12.933073417Z" level=info msg="StartContainer for \"2a9347fb54b65a6b370a743e7fa472602becb1bb490fd20f8a752754e00502e8\" returns successfully"
Sep 4 17:13:14.846482 kubelet[3531]: E0904 17:13:14.846415 3531 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 4 17:13:24.848229 kubelet[3531]: E0904 17:13:24.848164 3531 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-2?timeout=10s\": context deadline exceeded"