Jul 2 08:58:02.183675 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 2 08:58:02.183724 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024 Jul 2 08:58:02.183752 kernel: KASLR disabled due to lack of seed Jul 2 08:58:02.183769 kernel: efi: EFI v2.7 by EDK II Jul 2 08:58:02.183787 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18 Jul 2 08:58:02.183803 kernel: ACPI: Early table checksum verification disabled Jul 2 08:58:02.183822 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 2 08:58:02.183839 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 2 08:58:02.183856 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 2 08:58:02.183873 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 2 08:58:02.183896 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 2 08:58:02.183913 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 2 08:58:02.183929 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 2 08:58:02.183945 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 2 08:58:02.183964 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 2 08:58:02.183986 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 2 08:58:02.184004 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 2 08:58:02.184021 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 2 08:58:02.184038 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 2 08:58:02.184055 kernel: printk: bootconsole [uart0] enabled Jul 2 08:58:02.184071 kernel: NUMA: Failed to initialise from firmware Jul 2 08:58:02.184088 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 2 08:58:02.184105 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jul 2 08:58:02.184121 kernel: Zone ranges: Jul 2 08:58:02.184138 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 2 08:58:02.184154 kernel: DMA32 empty Jul 2 08:58:02.184175 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 2 08:58:02.184193 kernel: Movable zone start for each node Jul 2 08:58:02.184209 kernel: Early memory node ranges Jul 2 08:58:02.184226 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 2 08:58:02.184243 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 2 08:58:02.184260 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 2 08:58:02.184276 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 2 08:58:02.184293 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 2 08:58:02.184310 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 2 08:58:02.184329 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 2 08:58:02.184345 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 2 08:58:02.184362 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 2 08:58:02.184383 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Jul 2 08:58:02.184401 kernel: psci: probing for conduit method from ACPI. Jul 2 08:58:02.187495 kernel: psci: PSCIv1.0 detected in firmware. Jul 2 08:58:02.187533 kernel: psci: Using standard PSCI v0.2 function IDs Jul 2 08:58:02.187552 kernel: psci: Trusted OS migration not required Jul 2 08:58:02.187581 kernel: psci: SMC Calling Convention v1.1 Jul 2 08:58:02.187600 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jul 2 08:58:02.187618 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jul 2 08:58:02.187637 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 2 08:58:02.187655 kernel: Detected PIPT I-cache on CPU0 Jul 2 08:58:02.187673 kernel: CPU features: detected: GIC system register CPU interface Jul 2 08:58:02.187691 kernel: CPU features: detected: Spectre-v2 Jul 2 08:58:02.187708 kernel: CPU features: detected: Spectre-v3a Jul 2 08:58:02.187728 kernel: CPU features: detected: Spectre-BHB Jul 2 08:58:02.187746 kernel: CPU features: detected: ARM erratum 1742098 Jul 2 08:58:02.187766 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 2 08:58:02.187789 kernel: alternatives: applying boot alternatives Jul 2 08:58:02.187810 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930 Jul 2 08:58:02.187830 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 08:58:02.187848 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 08:58:02.187866 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 08:58:02.187884 kernel: Fallback order for Node 0: 0 Jul 2 08:58:02.187902 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 2 08:58:02.187920 kernel: Policy zone: Normal Jul 2 08:58:02.187937 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 08:58:02.187955 kernel: software IO TLB: area num 2. Jul 2 08:58:02.187973 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 2 08:58:02.187998 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved) Jul 2 08:58:02.188016 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 08:58:02.188033 kernel: trace event string verifier disabled Jul 2 08:58:02.188051 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 08:58:02.188069 kernel: rcu: RCU event tracing is enabled. Jul 2 08:58:02.188087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 08:58:02.188105 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 08:58:02.188123 kernel: Tracing variant of Tasks RCU enabled. Jul 2 08:58:02.188141 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 08:58:02.188159 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 08:58:02.188176 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 2 08:58:02.188199 kernel: GICv3: 96 SPIs implemented Jul 2 08:58:02.188217 kernel: GICv3: 0 Extended SPIs implemented Jul 2 08:58:02.188235 kernel: Root IRQ handler: gic_handle_irq Jul 2 08:58:02.188253 kernel: GICv3: GICv3 features: 16 PPIs Jul 2 08:58:02.188270 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 2 08:58:02.188288 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 2 08:58:02.188307 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Jul 2 08:58:02.188325 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Jul 2 08:58:02.188343 kernel: GICv3: using LPI property table @0x00000004000e0000 Jul 2 08:58:02.188361 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 2 08:58:02.188379 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Jul 2 08:58:02.188397 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 08:58:02.188442 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 2 08:58:02.190504 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 2 08:58:02.190526 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 2 08:58:02.190545 kernel: Console: colour dummy device 80x25 Jul 2 08:58:02.190564 kernel: printk: console [tty1] enabled Jul 2 08:58:02.190582 kernel: ACPI: Core revision 20230628 Jul 2 08:58:02.190601 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 2 08:58:02.190619 kernel: pid_max: default: 32768 minimum: 301 Jul 2 08:58:02.190637 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 08:58:02.190655 kernel: SELinux: Initializing. Jul 2 08:58:02.190683 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:58:02.190702 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:58:02.190720 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:58:02.190738 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:58:02.190756 kernel: rcu: Hierarchical SRCU implementation. Jul 2 08:58:02.190775 kernel: rcu: Max phase no-delay instances is 400. Jul 2 08:58:02.190793 kernel: Platform MSI: ITS@0x10080000 domain created Jul 2 08:58:02.190811 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 2 08:58:02.190829 kernel: Remapping and enabling EFI services. Jul 2 08:58:02.190852 kernel: smp: Bringing up secondary CPUs ... Jul 2 08:58:02.190870 kernel: Detected PIPT I-cache on CPU1 Jul 2 08:58:02.190894 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 2 08:58:02.190913 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Jul 2 08:58:02.190931 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 2 08:58:02.190949 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 08:58:02.190966 kernel: SMP: Total of 2 processors activated. 
Jul 2 08:58:02.190984 kernel: CPU features: detected: 32-bit EL0 Support Jul 2 08:58:02.191002 kernel: CPU features: detected: 32-bit EL1 Support Jul 2 08:58:02.191027 kernel: CPU features: detected: CRC32 instructions Jul 2 08:58:02.191046 kernel: CPU: All CPU(s) started at EL1 Jul 2 08:58:02.191078 kernel: alternatives: applying system-wide alternatives Jul 2 08:58:02.191101 kernel: devtmpfs: initialized Jul 2 08:58:02.191122 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 08:58:02.191141 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 08:58:02.191162 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 08:58:02.191184 kernel: SMBIOS 3.0.0 present. Jul 2 08:58:02.191206 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 2 08:58:02.191229 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 08:58:02.191248 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 2 08:58:02.191267 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 2 08:58:02.191285 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 2 08:58:02.191304 kernel: audit: initializing netlink subsys (disabled) Jul 2 08:58:02.191323 kernel: audit: type=2000 audit(0.294:1): state=initialized audit_enabled=0 res=1 Jul 2 08:58:02.191341 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 08:58:02.191365 kernel: cpuidle: using governor menu Jul 2 08:58:02.191389 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 2 08:58:02.191409 kernel: ASID allocator initialised with 65536 entries Jul 2 08:58:02.191470 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 08:58:02.191493 kernel: Serial: AMBA PL011 UART driver Jul 2 08:58:02.191512 kernel: Modules: 17600 pages in range for non-PLT usage Jul 2 08:58:02.191530 kernel: Modules: 509120 pages in range for PLT usage Jul 2 08:58:02.191549 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 08:58:02.191568 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 08:58:02.191594 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 2 08:58:02.191613 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 2 08:58:02.191632 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 08:58:02.191650 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 08:58:02.191669 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 2 08:58:02.191687 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 2 08:58:02.191706 kernel: ACPI: Added _OSI(Module Device) Jul 2 08:58:02.191724 kernel: ACPI: Added _OSI(Processor Device) Jul 2 08:58:02.191743 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 08:58:02.191766 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 08:58:02.191785 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 08:58:02.191804 kernel: ACPI: Interpreter enabled Jul 2 08:58:02.191822 kernel: ACPI: Using GIC for interrupt routing Jul 2 08:58:02.191841 kernel: ACPI: MCFG table detected, 1 entries Jul 2 08:58:02.191859 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 2 08:58:02.192170 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 08:58:02.192397 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] 
Jul 2 08:58:02.194690 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 2 08:58:02.194922 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 2 08:58:02.195128 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 2 08:58:02.195155 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 2 08:58:02.195175 kernel: acpiphp: Slot [1] registered Jul 2 08:58:02.195194 kernel: acpiphp: Slot [2] registered Jul 2 08:58:02.195212 kernel: acpiphp: Slot [3] registered Jul 2 08:58:02.195231 kernel: acpiphp: Slot [4] registered Jul 2 08:58:02.195250 kernel: acpiphp: Slot [5] registered Jul 2 08:58:02.195279 kernel: acpiphp: Slot [6] registered Jul 2 08:58:02.195298 kernel: acpiphp: Slot [7] registered Jul 2 08:58:02.195316 kernel: acpiphp: Slot [8] registered Jul 2 08:58:02.195335 kernel: acpiphp: Slot [9] registered Jul 2 08:58:02.195353 kernel: acpiphp: Slot [10] registered Jul 2 08:58:02.195372 kernel: acpiphp: Slot [11] registered Jul 2 08:58:02.195390 kernel: acpiphp: Slot [12] registered Jul 2 08:58:02.195408 kernel: acpiphp: Slot [13] registered Jul 2 08:58:02.195475 kernel: acpiphp: Slot [14] registered Jul 2 08:58:02.195504 kernel: acpiphp: Slot [15] registered Jul 2 08:58:02.195523 kernel: acpiphp: Slot [16] registered Jul 2 08:58:02.195542 kernel: acpiphp: Slot [17] registered Jul 2 08:58:02.195560 kernel: acpiphp: Slot [18] registered Jul 2 08:58:02.195579 kernel: acpiphp: Slot [19] registered Jul 2 08:58:02.195597 kernel: acpiphp: Slot [20] registered Jul 2 08:58:02.195615 kernel: acpiphp: Slot [21] registered Jul 2 08:58:02.195634 kernel: acpiphp: Slot [22] registered Jul 2 08:58:02.195652 kernel: acpiphp: Slot [23] registered Jul 2 08:58:02.195670 kernel: acpiphp: Slot [24] registered Jul 2 08:58:02.195694 kernel: acpiphp: Slot [25] registered Jul 2 08:58:02.195713 kernel: acpiphp: Slot [26] registered Jul 2 08:58:02.195732 kernel: acpiphp: Slot [27] registered Jul 2 08:58:02.195750 kernel: acpiphp: Slot [28] registered Jul 2 08:58:02.195768 kernel: acpiphp: Slot [29] registered Jul 2 08:58:02.195787 kernel: acpiphp: Slot [30] registered Jul 2 08:58:02.195805 kernel: acpiphp: Slot [31] registered Jul 2 08:58:02.195823 kernel: PCI host bridge to bus 0000:00 Jul 2 08:58:02.196062 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 2 08:58:02.196262 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 2 08:58:02.197564 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 2 08:58:02.197797 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 2 08:58:02.198077 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 2 08:58:02.198326 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 2 08:58:02.199629 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 2 08:58:02.199878 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 2 08:58:02.200088 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 2 08:58:02.200293 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:58:02.202667 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 2 08:58:02.202924 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 2 08:58:02.203142 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 2 08:58:02.203364 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] 
Jul 2 08:58:02.203674 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:58:02.203886 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 2 08:58:02.204101 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 2 08:58:02.204316 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 2 08:58:02.204559 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 2 08:58:02.206591 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 2 08:58:02.206811 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 2 08:58:02.207011 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 2 08:58:02.207200 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 2 08:58:02.207226 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 2 08:58:02.207246 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 2 08:58:02.207266 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 2 08:58:02.207504 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 2 08:58:02.207524 kernel: iommu: Default domain type: Translated Jul 2 08:58:02.207543 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 2 08:58:02.207569 kernel: efivars: Registered efivars operations Jul 2 08:58:02.207588 kernel: vgaarb: loaded Jul 2 08:58:02.207607 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 2 08:58:02.207625 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 08:58:02.207644 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 08:58:02.207662 kernel: pnp: PnP ACPI init Jul 2 08:58:02.207879 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 2 08:58:02.207907 kernel: pnp: PnP ACPI: found 1 devices Jul 2 08:58:02.207932 kernel: NET: Registered PF_INET protocol family Jul 2 08:58:02.207951 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 08:58:02.207970 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 08:58:02.207990 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 08:58:02.208008 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 08:58:02.208027 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 08:58:02.208046 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 08:58:02.208064 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:58:02.208083 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:58:02.208106 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 08:58:02.208125 kernel: PCI: CLS 0 bytes, default 64 Jul 2 08:58:02.208144 kernel: kvm [1]: HYP mode not available Jul 2 08:58:02.208163 kernel: Initialise system trusted keyrings Jul 2 08:58:02.208181 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 08:58:02.208200 kernel: Key type asymmetric registered Jul 2 08:58:02.208218 kernel: Asymmetric key parser 'x509' registered Jul 2 08:58:02.208237 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 2 08:58:02.208256 kernel: io scheduler mq-deadline registered Jul 2 08:58:02.208279 kernel: io scheduler kyber registered Jul 2 08:58:02.208298 kernel: io scheduler bfq registered Jul 2 08:58:02.210129 kernel: pl061_gpio 
ARMH0061:00: PL061 GPIO chip registered Jul 2 08:58:02.210178 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 2 08:58:02.210197 kernel: ACPI: button: Power Button [PWRB] Jul 2 08:58:02.210217 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 2 08:58:02.210236 kernel: ACPI: button: Sleep Button [SLPB] Jul 2 08:58:02.210254 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 08:58:02.210285 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 2 08:58:02.211716 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 2 08:58:02.211752 kernel: printk: console [ttyS0] disabled Jul 2 08:58:02.211772 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 2 08:58:02.211791 kernel: printk: console [ttyS0] enabled Jul 2 08:58:02.211810 kernel: printk: bootconsole [uart0] disabled Jul 2 08:58:02.211829 kernel: thunder_xcv, ver 1.0 Jul 2 08:58:02.211849 kernel: thunder_bgx, ver 1.0 Jul 2 08:58:02.211868 kernel: nicpf, ver 1.0 Jul 2 08:58:02.211886 kernel: nicvf, ver 1.0 Jul 2 08:58:02.212140 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 2 08:58:02.212340 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:58:01 UTC (1719910681) Jul 2 08:58:02.212366 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 08:58:02.212386 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 2 08:58:02.212405 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 2 08:58:02.212456 kernel: watchdog: Hard watchdog permanently disabled Jul 2 08:58:02.212478 kernel: NET: Registered PF_INET6 protocol family Jul 2 08:58:02.212498 kernel: Segment Routing with IPv6 Jul 2 08:58:02.212524 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 08:58:02.212543 kernel: NET: Registered PF_PACKET protocol family Jul 2 08:58:02.212561 kernel: Key type dns_resolver registered Jul 2 08:58:02.212580 kernel: registered taskstats version 1 Jul 2 08:58:02.212598 kernel: Loading compiled-in X.509 certificates Jul 2 08:58:02.212617 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf' Jul 2 08:58:02.212635 kernel: Key type .fscrypt registered Jul 2 08:58:02.212653 kernel: Key type fscrypt-provisioning registered Jul 2 08:58:02.212671 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 08:58:02.212695 kernel: ima: Allocated hash algorithm: sha1 Jul 2 08:58:02.212713 kernel: ima: No architecture policies found Jul 2 08:58:02.212732 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 2 08:58:02.212750 kernel: clk: Disabling unused clocks Jul 2 08:58:02.212769 kernel: Freeing unused kernel memory: 39040K Jul 2 08:58:02.212787 kernel: Run /init as init process Jul 2 08:58:02.212805 kernel: with arguments: Jul 2 08:58:02.212823 kernel: /init Jul 2 08:58:02.212841 kernel: with environment: Jul 2 08:58:02.212864 kernel: HOME=/ Jul 2 08:58:02.212883 kernel: TERM=linux Jul 2 08:58:02.212901 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 08:58:02.212924 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:58:02.212947 systemd[1]: Detected virtualization amazon. 
Jul 2 08:58:02.212968 systemd[1]: Detected architecture arm64. Jul 2 08:58:02.212988 systemd[1]: Running in initrd. Jul 2 08:58:02.213008 systemd[1]: No hostname configured, using default hostname. Jul 2 08:58:02.213032 systemd[1]: Hostname set to . Jul 2 08:58:02.213053 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:58:02.213073 systemd[1]: Queued start job for default target initrd.target. Jul 2 08:58:02.213093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:58:02.213114 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:58:02.213135 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 08:58:02.213155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:58:02.213180 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 08:58:02.213202 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 08:58:02.213225 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 08:58:02.213246 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 08:58:02.213266 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:58:02.213287 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:58:02.213307 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:58:02.213332 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:58:02.213352 systemd[1]: Reached target swap.target - Swaps. Jul 2 08:58:02.213372 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:58:02.213392 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:58:02.213412 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:58:02.216104 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 08:58:02.216283 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 08:58:02.216306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:58:02.216327 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:58:02.216357 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:58:02.216378 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:58:02.216398 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 2 08:58:02.216456 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:58:02.216483 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 08:58:02.216505 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 08:58:02.216526 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:58:02.216546 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:58:02.216576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:58:02.216597 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 08:58:02.216667 systemd-journald[251]: Collecting audit messages is disabled. 
Jul 2 08:58:02.216713 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:58:02.216739 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 08:58:02.216762 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 08:58:02.216784 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:58:02.216805 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:58:02.216825 systemd-journald[251]: Journal started Jul 2 08:58:02.216868 systemd-journald[251]: Runtime Journal (/run/log/journal/ec27e8cb911dcd4eccb035efc58d82a9) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:58:02.205389 systemd-modules-load[252]: Inserted module 'overlay' Jul 2 08:58:02.230451 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:58:02.237493 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 08:58:02.241465 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:58:02.241548 kernel: Bridge firewalling registered Jul 2 08:58:02.241628 systemd-modules-load[252]: Inserted module 'br_netfilter' Jul 2 08:58:02.245950 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:58:02.252395 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:58:02.268882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:58:02.279713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:58:02.294545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:58:02.300243 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:58:02.318152 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 08:58:02.323979 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:58:02.337576 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:58:02.349716 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:58:02.369941 dracut-cmdline[282]: dracut-dracut-053 Jul 2 08:58:02.375578 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930 Jul 2 08:58:02.424802 systemd-resolved[287]: Positive Trust Anchors: Jul 2 08:58:02.424838 systemd-resolved[287]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:58:02.424899 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:58:02.520454 kernel: SCSI subsystem initialized Jul 2 08:58:02.526459 kernel: Loading iSCSI transport class v2.0-870. Jul 2 08:58:02.539465 kernel: iscsi: registered transport (tcp) Jul 2 08:58:02.561737 kernel: iscsi: registered transport (qla4xxx) Jul 2 08:58:02.561825 kernel: QLogic iSCSI HBA Driver Jul 2 08:58:02.651455 kernel: random: crng init done Jul 2 08:58:02.651620 systemd-resolved[287]: Defaulting to hostname 'linux'. Jul 2 08:58:02.655021 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:58:02.659755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:58:02.680646 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 08:58:02.689735 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 08:58:02.728078 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 08:58:02.728168 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:58:02.728197 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 08:58:02.795486 kernel: raid6: neonx8 gen() 6651 MB/s Jul 2 08:58:02.812457 kernel: raid6: neonx4 gen() 6458 MB/s Jul 2 08:58:02.829453 kernel: raid6: neonx2 gen() 5377 MB/s Jul 2 08:58:02.846454 kernel: raid6: neonx1 gen() 3931 MB/s Jul 2 08:58:02.863453 kernel: raid6: int64x8 gen() 3783 MB/s Jul 2 08:58:02.880454 kernel: raid6: int64x4 gen() 3688 MB/s Jul 2 08:58:02.897453 kernel: raid6: int64x2 gen() 3556 MB/s Jul 2 08:58:02.915110 kernel: raid6: int64x1 gen() 2762 MB/s Jul 2 08:58:02.915154 kernel: raid6: using algorithm neonx8 gen() 6651 MB/s Jul 2 08:58:02.933093 kernel: raid6: .... xor() 4909 MB/s, rmw enabled Jul 2 08:58:02.933136 kernel: raid6: using neon recovery algorithm Jul 2 08:58:02.942136 kernel: xor: measuring software checksum speed Jul 2 08:58:02.942188 kernel: 8regs : 11029 MB/sec Jul 2 08:58:02.944451 kernel: 32regs : 11939 MB/sec Jul 2 08:58:02.946462 kernel: arm64_neon : 9339 MB/sec Jul 2 08:58:02.946496 kernel: xor: using function: 32regs (11939 MB/sec) Jul 2 08:58:03.033467 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 08:58:03.052338 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:58:03.061771 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:58:03.105334 systemd-udevd[469]: Using default interface naming scheme 'v255'. Jul 2 08:58:03.114813 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:58:03.127004 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 08:58:03.165082 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Jul 2 08:58:03.221702 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 2 08:58:03.232826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:58:03.352503 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:58:03.367102 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 08:58:03.418222 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 08:58:03.424397 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:58:03.430285 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:58:03.432637 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:58:03.446104 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 08:58:03.491336 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:58:03.533093 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 2 08:58:03.533163 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 2 08:58:03.552162 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 08:58:03.552500 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 08:58:03.552757 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:13:04:5c:6c:ed Jul 2 08:58:03.559649 (udev-worker)[515]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:58:03.585298 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:58:03.587734 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:58:03.594131 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:58:03.597474 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 2 08:58:03.602457 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 08:58:03.599506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:58:03.599784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:58:03.602991 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:58:03.619939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:58:03.627939 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 08:58:03.634480 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 08:58:03.634568 kernel: GPT:9289727 != 16777215 Jul 2 08:58:03.637524 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 08:58:03.638446 kernel: GPT:9289727 != 16777215 Jul 2 08:58:03.638483 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:58:03.638509 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:58:03.651488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:58:03.662793 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:58:03.707549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:58:03.759471 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (520) Jul 2 08:58:03.784529 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Jul 2 08:58:03.801473 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (537) Jul 2 08:58:03.881704 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 08:58:03.898693 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 2 08:58:03.912177 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 2 08:58:03.914612 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 2 08:58:03.937816 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 08:58:03.956010 disk-uuid[658]: Primary Header is updated. Jul 2 08:58:03.956010 disk-uuid[658]: Secondary Entries is updated. Jul 2 08:58:03.956010 disk-uuid[658]: Secondary Header is updated. Jul 2 08:58:03.965406 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:58:03.985522 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:58:03.993468 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:58:04.999000 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:58:04.999747 disk-uuid[660]: The operation has completed successfully. Jul 2 08:58:05.187685 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:58:05.189823 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 08:58:05.228669 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 08:58:05.235260 sh[1003]: Success Jul 2 08:58:05.263705 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 08:58:05.363997 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 08:58:05.381646 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 08:58:05.387348 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 08:58:05.425501 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1 Jul 2 08:58:05.425568 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:58:05.425595 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 08:58:05.426795 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 08:58:05.427792 kernel: BTRFS info (device dm-0): using free space tree Jul 2 08:58:05.550456 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 2 08:58:05.584996 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 08:58:05.588646 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 08:58:05.596730 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 08:58:05.609845 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 08:58:05.636983 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:58:05.637071 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:58:05.638526 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:58:05.643631 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:58:05.665001 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 2 08:58:05.667216 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:58:05.692100 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 08:58:05.701933 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 08:58:05.786480 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:58:05.801875 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:58:05.850687 systemd-networkd[1196]: lo: Link UP Jul 2 08:58:05.852303 systemd-networkd[1196]: lo: Gained carrier Jul 2 08:58:05.856219 systemd-networkd[1196]: Enumeration completed Jul 2 08:58:05.856368 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:58:05.858491 systemd[1]: Reached target network.target - Network. Jul 2 08:58:05.860805 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:58:05.860812 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:58:05.867271 systemd-networkd[1196]: eth0: Link UP Jul 2 08:58:05.867279 systemd-networkd[1196]: eth0: Gained carrier Jul 2 08:58:05.867297 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:58:05.886513 systemd-networkd[1196]: eth0: DHCPv4 address 172.31.30.27/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:58:06.019077 ignition[1128]: Ignition 2.18.0 Jul 2 08:58:06.019112 ignition[1128]: Stage: fetch-offline Jul 2 08:58:06.020710 ignition[1128]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:06.020741 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:06.022282 ignition[1128]: Ignition finished successfully Jul 2 08:58:06.029238 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:58:06.039669 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 2 08:58:06.066017 ignition[1207]: Ignition 2.18.0 Jul 2 08:58:06.066046 ignition[1207]: Stage: fetch Jul 2 08:58:06.067174 ignition[1207]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:06.067200 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:06.067339 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:06.077411 ignition[1207]: PUT result: OK Jul 2 08:58:06.083177 ignition[1207]: parsed url from cmdline: "" Jul 2 08:58:06.083194 ignition[1207]: no config URL provided Jul 2 08:58:06.083209 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:58:06.083234 ignition[1207]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:58:06.083265 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:06.088143 ignition[1207]: PUT result: OK Jul 2 08:58:06.089330 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 2 08:58:06.091650 ignition[1207]: GET result: OK Jul 2 08:58:06.094669 ignition[1207]: parsing config with SHA512: 086e771c76b8c0bc659b49658cb289943e4ace4230ac5a120ad16df0b1b9801589c2f089f74f8cf68dd9e869ba454d59e758b15df6190a26088313cf77827db9 Jul 2 08:58:06.101584 unknown[1207]: fetched base config from "system" Jul 2 08:58:06.101612 unknown[1207]: fetched base config from "system" Jul 2 08:58:06.101626 unknown[1207]: fetched user config from "aws" Jul 2 08:58:06.105409 ignition[1207]: fetch: fetch complete Jul 2 08:58:06.105651 ignition[1207]: fetch: fetch passed Jul 2 08:58:06.106179 ignition[1207]: Ignition finished successfully Jul 2 08:58:06.114480 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 08:58:06.127695 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 08:58:06.152478 ignition[1214]: Ignition 2.18.0 Jul 2 08:58:06.152501 ignition[1214]: Stage: kargs Jul 2 08:58:06.153090 ignition[1214]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:06.153115 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:06.153241 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:06.158531 ignition[1214]: PUT result: OK Jul 2 08:58:06.166644 ignition[1214]: kargs: kargs passed Jul 2 08:58:06.166801 ignition[1214]: Ignition finished successfully Jul 2 08:58:06.172502 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 08:58:06.180725 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 08:58:06.205530 ignition[1221]: Ignition 2.18.0 Jul 2 08:58:06.205559 ignition[1221]: Stage: disks Jul 2 08:58:06.207142 ignition[1221]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:06.207169 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:06.207842 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:06.210482 ignition[1221]: PUT result: OK Jul 2 08:58:06.218354 ignition[1221]: disks: disks passed Jul 2 08:58:06.219659 ignition[1221]: Ignition finished successfully Jul 2 08:58:06.223303 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 08:58:06.226880 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 08:58:06.229043 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 08:58:06.231335 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:58:06.233164 systemd[1]: Reached target sysinit.target - System Initialization. 
Jul 2 08:58:06.235307 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:58:06.247284 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 08:58:06.303990 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 08:58:06.311348 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 08:58:06.324724 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 08:58:06.412482 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none. Jul 2 08:58:06.414003 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 08:58:06.416309 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 08:58:06.438582 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:58:06.441674 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 08:58:06.446632 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 08:58:06.446717 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:58:06.446768 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:58:06.464204 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 08:58:06.488908 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249) Jul 2 08:58:06.488983 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:58:06.489341 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 08:58:06.501569 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:58:06.501610 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:58:06.501637 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:58:06.505167 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:58:06.879770 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:58:06.898526 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:58:06.906415 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:58:06.915338 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:58:07.214572 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 08:58:07.231762 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 08:58:07.238724 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 08:58:07.251505 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:58:07.251774 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 08:58:07.297900 ignition[1362]: INFO : Ignition 2.18.0 Jul 2 08:58:07.297900 ignition[1362]: INFO : Stage: mount Jul 2 08:58:07.302099 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:07.302099 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:07.302099 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:07.303747 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 2 08:58:07.313414 ignition[1362]: INFO : PUT result: OK Jul 2 08:58:07.317836 ignition[1362]: INFO : mount: mount passed Jul 2 08:58:07.319612 ignition[1362]: INFO : Ignition finished successfully Jul 2 08:58:07.321331 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 08:58:07.336876 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 08:58:07.357746 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:58:07.383706 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1375) Jul 2 08:58:07.387174 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:58:07.387215 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:58:07.387242 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:58:07.392461 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:58:07.396483 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:58:07.441976 ignition[1393]: INFO : Ignition 2.18.0 Jul 2 08:58:07.441976 ignition[1393]: INFO : Stage: files Jul 2 08:58:07.444965 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:07.444965 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:07.444965 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:07.461961 ignition[1393]: INFO : PUT result: OK Jul 2 08:58:07.466359 ignition[1393]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:58:07.468663 ignition[1393]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:58:07.468663 ignition[1393]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:58:07.487867 ignition[1393]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:58:07.493186 ignition[1393]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:58:07.495849 unknown[1393]: wrote ssh authorized keys file for user: core Jul 2 08:58:07.499838 ignition[1393]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:58:07.503907 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:58:07.503907 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 08:58:07.536659 systemd-networkd[1196]: eth0: Gained IPv6LL Jul 2 08:58:07.568774 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 08:58:07.662880 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:58:07.666583 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 08:58:07.666583 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 08:58:07.666583 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:58:07.666583 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Jul 2 08:58:07.666583 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:58:07.666583 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:58:07.666583 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:58:07.666583 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:58:07.666583 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:58:07.693747 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:58:07.693747 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 08:58:07.693747 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 08:58:07.693747 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 08:58:07.693747 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jul 2 08:58:08.168831 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 08:58:08.511870 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 08:58:08.511870 ignition[1393]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 08:58:08.518546 ignition[1393]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:58:08.518546 ignition[1393]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:58:08.518546 ignition[1393]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 08:58:08.518546 ignition[1393]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 08:58:08.518546 ignition[1393]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 08:58:08.518546 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:58:08.535309 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:58:08.535309 ignition[1393]: INFO : files: files passed Jul 2 08:58:08.535309 ignition[1393]: INFO : Ignition finished successfully Jul 2 08:58:08.541831 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 08:58:08.566898 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jul 2 08:58:08.574059 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 08:58:08.585822 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 08:58:08.587536 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 08:58:08.605397 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:58:08.605397 initrd-setup-root-after-ignition[1421]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:58:08.613202 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:58:08.618503 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:58:08.623308 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 08:58:08.638828 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 08:58:08.684063 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 08:58:08.685685 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 08:58:08.691328 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 08:58:08.694919 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 08:58:08.696861 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 08:58:08.712727 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 08:58:08.750508 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:58:08.762702 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 08:58:08.787783 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:58:08.792058 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:58:08.796369 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 08:58:08.799639 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 08:58:08.799882 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:58:08.802572 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 08:58:08.809864 systemd[1]: Stopped target basic.target - Basic System. Jul 2 08:58:08.813287 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 08:58:08.815856 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:58:08.821497 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 08:58:08.824352 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 08:58:08.829778 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:58:08.832593 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 08:58:08.838415 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 08:58:08.841247 systemd[1]: Stopped target swap.target - Swaps. Jul 2 08:58:08.845297 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 08:58:08.845565 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:58:08.848584 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 2 08:58:08.855919 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:58:08.858325 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 08:58:08.861906 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:58:08.864999 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 08:58:08.865248 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 08:58:08.873514 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 08:58:08.873933 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:58:08.878260 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 08:58:08.878513 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 08:58:08.902706 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 08:58:08.917118 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 08:58:08.918083 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 08:58:08.918353 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:58:08.925827 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 08:58:08.927538 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 08:58:08.939029 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 08:58:08.939220 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 08:58:08.952396 ignition[1445]: INFO : Ignition 2.18.0 Jul 2 08:58:08.952396 ignition[1445]: INFO : Stage: umount Jul 2 08:58:08.952396 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:08.952396 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:08.952396 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:08.967168 ignition[1445]: INFO : PUT result: OK Jul 2 08:58:08.974699 ignition[1445]: INFO : umount: umount passed Jul 2 08:58:08.977195 ignition[1445]: INFO : Ignition finished successfully Jul 2 08:58:08.978793 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 08:58:08.979806 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 08:58:08.988648 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 08:58:08.988829 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 08:58:08.991266 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:58:08.991360 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 08:58:08.998518 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 08:58:08.998623 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 08:58:09.001652 systemd[1]: Stopped target network.target - Network. Jul 2 08:58:09.004842 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 08:58:09.004952 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:58:09.011641 systemd[1]: Stopped target paths.target - Path Units. Jul 2 08:58:09.020119 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 08:58:09.023580 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:58:09.026685 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 2 08:58:09.028682 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 08:58:09.030873 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 08:58:09.030957 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:58:09.036948 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 08:58:09.037028 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:58:09.039682 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:58:09.039777 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 08:58:09.041674 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 08:58:09.041774 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 08:58:09.044071 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 08:58:09.044563 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 08:58:09.051067 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 08:58:09.060486 systemd-networkd[1196]: eth0: DHCPv6 lease lost Jul 2 08:58:09.066985 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 08:58:09.067236 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 08:58:09.071150 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 08:58:09.075832 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 08:58:09.080518 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 08:58:09.080640 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:58:09.098623 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 08:58:09.108283 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 08:58:09.108406 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:58:09.110681 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:58:09.110766 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:58:09.112829 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 08:58:09.113337 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 08:58:09.118194 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 08:58:09.118294 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:58:09.125549 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:58:09.159110 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 08:58:09.159508 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:58:09.163265 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 08:58:09.166982 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 08:58:09.174805 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 08:58:09.174966 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 08:58:09.180533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:58:09.180613 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:58:09.182533 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 08:58:09.182626 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jul 2 08:58:09.184723 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:58:09.184805 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 08:58:09.186828 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:58:09.186915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:58:09.198109 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 08:58:09.203545 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 08:58:09.203665 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:58:09.206253 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 08:58:09.206336 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:58:09.208840 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 08:58:09.208920 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:58:09.211361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:58:09.211463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:58:09.214457 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:58:09.214647 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 08:58:09.219293 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 08:58:09.219477 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 08:58:09.259603 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:58:09.259815 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 08:58:09.265975 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 08:58:09.284731 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 08:58:09.305760 systemd[1]: Switching root. Jul 2 08:58:09.369373 systemd-journald[251]: Journal stopped Jul 2 08:58:12.221940 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jul 2 08:58:12.222077 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 08:58:12.222123 kernel: SELinux: policy capability open_perms=1 Jul 2 08:58:12.222160 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 08:58:12.222196 kernel: SELinux: policy capability always_check_network=0 Jul 2 08:58:12.222230 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 08:58:12.222263 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 08:58:12.222306 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 08:58:12.222348 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 08:58:12.222382 kernel: audit: type=1403 audit(1719910690.353:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:58:12.222456 systemd[1]: Successfully loaded SELinux policy in 63.054ms. Jul 2 08:58:12.222504 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.194ms. 
Jul 2 08:58:12.222546 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:58:12.222581 systemd[1]: Detected virtualization amazon. Jul 2 08:58:12.222614 systemd[1]: Detected architecture arm64. Jul 2 08:58:12.222655 systemd[1]: Detected first boot. Jul 2 08:58:12.222686 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:58:12.222719 zram_generator::config[1488]: No configuration found. Jul 2 08:58:12.222754 systemd[1]: Populated /etc with preset unit settings. Jul 2 08:58:12.222785 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 08:58:12.222817 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 08:58:12.222853 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 08:58:12.222887 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 08:58:12.222919 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 08:58:12.222951 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 08:58:12.222983 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 08:58:12.223015 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 08:58:12.223046 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 08:58:12.223076 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 08:58:12.223120 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 08:58:12.223151 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:58:12.223181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:58:12.223212 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 08:58:12.223242 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 08:58:12.223272 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 08:58:12.223303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:58:12.223334 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 08:58:12.223368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:58:12.223402 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 08:58:12.225501 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 08:58:12.225554 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 08:58:12.225584 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 08:58:12.225614 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:58:12.225649 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:58:12.225679 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:58:12.225726 systemd[1]: Reached target swap.target - Swaps. 
Jul 2 08:58:12.225771 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 08:58:12.225804 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 08:58:12.225837 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:58:12.225866 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:58:12.225895 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:58:12.225926 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 08:58:12.225957 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 08:58:12.225986 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 08:58:12.226015 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 08:58:12.226049 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 08:58:12.226080 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 08:58:12.226112 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 08:58:12.226142 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:58:12.226172 systemd[1]: Reached target machines.target - Containers. Jul 2 08:58:12.226205 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 08:58:12.226236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:58:12.226265 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:58:12.226299 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 08:58:12.226328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:58:12.226359 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:58:12.226388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:58:12.226418 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 08:58:12.226471 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:58:12.226502 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 08:58:12.226534 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 08:58:12.226564 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 08:58:12.226602 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 08:58:12.226634 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 08:58:12.226665 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:58:12.226695 kernel: loop: module loaded Jul 2 08:58:12.226727 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:58:12.226759 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 08:58:12.226790 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 08:58:12.226822 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 2 08:58:12.226852 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 08:58:12.226887 systemd[1]: Stopped verity-setup.service. Jul 2 08:58:12.226918 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 08:58:12.226947 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 08:58:12.226978 kernel: fuse: init (API version 7.39) Jul 2 08:58:12.227006 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 08:58:12.227040 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 08:58:12.227069 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 08:58:12.227109 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 08:58:12.227142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:58:12.227171 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 08:58:12.227201 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 08:58:12.227231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:58:12.227260 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:58:12.227295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:58:12.227326 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:58:12.227355 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:58:12.227385 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 08:58:12.227414 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:58:12.229526 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:58:12.229582 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:58:12.229615 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 08:58:12.229646 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 08:58:12.229678 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 08:58:12.229724 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 08:58:12.229815 systemd-journald[1569]: Collecting audit messages is disabled. Jul 2 08:58:12.229871 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 08:58:12.229908 systemd-journald[1569]: Journal started Jul 2 08:58:12.229957 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec27e8cb911dcd4eccb035efc58d82a9) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:58:11.531202 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:58:11.599810 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 08:58:11.600641 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 08:58:12.237520 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:58:12.241589 kernel: ACPI: bus type drm_connector registered Jul 2 08:58:12.241652 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:58:12.257481 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 08:58:12.269767 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 2 08:58:12.286625 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 08:58:12.289471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:58:12.312294 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 08:58:12.312384 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:58:12.327672 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 08:58:12.327776 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:58:12.343486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:58:12.358548 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 08:58:12.372784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 08:58:12.380476 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:58:12.382843 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 08:58:12.385561 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:58:12.385906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:58:12.388263 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 08:58:12.390692 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 08:58:12.393339 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 08:58:12.396625 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 08:58:12.466650 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 08:58:12.483852 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 08:58:12.495682 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 08:58:12.519836 kernel: loop0: detected capacity change from 0 to 59672 Jul 2 08:58:12.519927 kernel: block loop0: the capability attribute has been deprecated. Jul 2 08:58:12.522595 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec27e8cb911dcd4eccb035efc58d82a9 is 40.992ms for 913 entries. Jul 2 08:58:12.522595 systemd-journald[1569]: System Journal (/var/log/journal/ec27e8cb911dcd4eccb035efc58d82a9) is 8.0M, max 195.6M, 187.6M free. Jul 2 08:58:12.573037 systemd-journald[1569]: Received client request to flush runtime journal. Jul 2 08:58:12.544380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:58:12.583560 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 08:58:12.586983 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:58:12.600786 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 08:58:12.606919 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jul 2 08:58:12.606954 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jul 2 08:58:12.620569 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 2 08:58:12.634483 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:58:12.639669 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 08:58:12.644624 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:58:12.646238 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 08:58:12.668813 udevadm[1631]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 08:58:12.684501 kernel: loop1: detected capacity change from 0 to 51896 Jul 2 08:58:12.713256 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 08:58:12.727312 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:58:12.776072 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Jul 2 08:58:12.776754 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Jul 2 08:58:12.786757 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:58:12.808547 kernel: loop2: detected capacity change from 0 to 113672 Jul 2 08:58:12.900496 kernel: loop3: detected capacity change from 0 to 194512 Jul 2 08:58:12.935766 kernel: loop4: detected capacity change from 0 to 59672 Jul 2 08:58:12.958471 kernel: loop5: detected capacity change from 0 to 51896 Jul 2 08:58:12.978540 kernel: loop6: detected capacity change from 0 to 113672 Jul 2 08:58:13.009491 kernel: loop7: detected capacity change from 0 to 194512 Jul 2 08:58:13.030198 (sd-merge)[1645]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 08:58:13.031223 (sd-merge)[1645]: Merged extensions into '/usr'. Jul 2 08:58:13.041580 systemd[1]: Reloading requested from client PID 1598 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 08:58:13.041606 systemd[1]: Reloading... Jul 2 08:58:13.201489 zram_generator::config[1669]: No configuration found. Jul 2 08:58:13.478248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:58:13.605467 systemd[1]: Reloading finished in 562 ms. Jul 2 08:58:13.644977 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 08:58:13.660952 systemd[1]: Starting ensure-sysext.service... Jul 2 08:58:13.673880 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:58:13.707708 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... Jul 2 08:58:13.707746 systemd[1]: Reloading... Jul 2 08:58:13.738827 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:58:13.739674 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 08:58:13.742027 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:58:13.742694 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Jul 2 08:58:13.742829 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Jul 2 08:58:13.750887 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 2 08:58:13.751076 systemd-tmpfiles[1721]: Skipping /boot Jul 2 08:58:13.775783 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:58:13.775803 systemd-tmpfiles[1721]: Skipping /boot Jul 2 08:58:13.863623 zram_generator::config[1744]: No configuration found. Jul 2 08:58:14.118160 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:58:14.205946 ldconfig[1594]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:58:14.231727 systemd[1]: Reloading finished in 523 ms. Jul 2 08:58:14.259494 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 08:58:14.262262 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 08:58:14.268258 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:58:14.290775 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:58:14.297775 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 08:58:14.306141 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 08:58:14.313112 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:58:14.327678 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:58:14.333014 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 08:58:14.347636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:58:14.361034 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:58:14.368024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:58:14.372545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:58:14.375024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:58:14.381392 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:58:14.381800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:58:14.396010 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 08:58:14.404863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:58:14.411066 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:58:14.413099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:58:14.413487 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 08:58:14.424485 systemd[1]: Finished ensure-sysext.service. Jul 2 08:58:14.447539 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 08:58:14.466095 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 08:58:14.482732 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 2 08:58:14.486292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:58:14.487538 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:58:14.493412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:58:14.494257 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:58:14.497237 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:58:14.497576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:58:14.508418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:58:14.522254 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:58:14.523702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:58:14.527379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:58:14.553188 systemd-udevd[1808]: Using default interface naming scheme 'v255'. Jul 2 08:58:14.575108 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 08:58:14.578831 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:58:14.592089 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 08:58:14.596731 augenrules[1837]: No rules Jul 2 08:58:14.603042 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:58:14.609798 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 08:58:14.623632 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:58:14.636736 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:58:14.763254 systemd-networkd[1850]: lo: Link UP Jul 2 08:58:14.763274 systemd-networkd[1850]: lo: Gained carrier Jul 2 08:58:14.764309 systemd-networkd[1850]: Enumeration completed Jul 2 08:58:14.764502 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:58:14.775744 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 08:58:14.828081 systemd-resolved[1807]: Positive Trust Anchors: Jul 2 08:58:14.829538 systemd-resolved[1807]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:58:14.829604 systemd-resolved[1807]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:58:14.855112 systemd-resolved[1807]: Defaulting to hostname 'linux'. Jul 2 08:58:14.864757 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:58:14.868642 systemd[1]: Reached target network.target - Network. Jul 2 08:58:14.870363 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jul 2 08:58:14.876468 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1862) Jul 2 08:58:14.877017 (udev-worker)[1849]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:58:14.905247 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 08:58:14.990358 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:58:14.990379 systemd-networkd[1850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:58:14.994874 systemd-networkd[1850]: eth0: Link UP Jul 2 08:58:14.995170 systemd-networkd[1850]: eth0: Gained carrier Jul 2 08:58:14.995204 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:58:15.019570 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1863) Jul 2 08:58:15.044786 systemd-networkd[1850]: eth0: DHCPv4 address 172.31.30.27/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:58:15.219948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:58:15.259133 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 08:58:15.263570 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 08:58:15.274751 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 08:58:15.278789 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 08:58:15.319236 lvm[1968]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:58:15.331573 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 08:58:15.354482 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 08:58:15.358222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:58:15.365788 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 08:58:15.386035 lvm[1975]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:58:15.388833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:58:15.392873 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 08:58:15.396854 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 08:58:15.399687 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 08:58:15.403226 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 08:58:15.410566 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 08:58:15.412929 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 08:58:15.415154 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:58:15.415206 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:58:15.416786 systemd[1]: Reached target timers.target - Timer Units. 
Jul 2 08:58:15.419791 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 08:58:15.424576 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 08:58:15.434903 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 08:58:15.438177 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 08:58:15.440978 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 08:58:15.444029 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:58:15.445818 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:58:15.447717 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:58:15.447877 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:58:15.459856 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 08:58:15.465032 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 08:58:15.472822 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 08:58:15.478686 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 08:58:15.489689 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 08:58:15.491568 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 08:58:15.496754 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 08:58:15.508698 jq[1984]: false Jul 2 08:58:15.514811 systemd[1]: Started ntpd.service - Network Time Service. Jul 2 08:58:15.520220 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 08:58:15.533565 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 08:58:15.539830 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 08:58:15.548326 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 08:58:15.559904 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 08:58:15.562894 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:58:15.563841 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:58:15.565953 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 08:58:15.571698 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 08:58:15.609330 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:58:15.609772 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 08:58:15.643729 dbus-daemon[1983]: [system] SELinux support is enabled Jul 2 08:58:15.644850 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 2 08:58:15.676330 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1850 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 08:58:15.677506 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 08:58:15.677639 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 08:58:15.677582 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 08:58:15.679985 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:58:15.680029 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 08:58:15.688664 jq[1996]: true Jul 2 08:58:15.698067 (ntainerd)[2009]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 08:58:15.700509 extend-filesystems[1985]: Found loop4 Jul 2 08:58:15.700509 extend-filesystems[1985]: Found loop5 Jul 2 08:58:15.700509 extend-filesystems[1985]: Found loop6 Jul 2 08:58:15.699586 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:58:15.717406 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:58:15.730680 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:58:15.730680 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:58:15.730680 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: ---------------------------------------------------- Jul 2 08:58:15.730680 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:58:15.730680 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:58:15.730680 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: corporation. Support and training for ntp-4 are Jul 2 08:58:15.730680 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: available at https://www.nwtime.org/support Jul 2 08:58:15.730680 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: ---------------------------------------------------- Jul 2 08:58:15.731272 extend-filesystems[1985]: Found loop7 Jul 2 08:58:15.731272 extend-filesystems[1985]: Found nvme0n1 Jul 2 08:58:15.731272 extend-filesystems[1985]: Found nvme0n1p1 Jul 2 08:58:15.731272 extend-filesystems[1985]: Found nvme0n1p2 Jul 2 08:58:15.731272 extend-filesystems[1985]: Found nvme0n1p3 Jul 2 08:58:15.731272 extend-filesystems[1985]: Found usr Jul 2 08:58:15.731272 extend-filesystems[1985]: Found nvme0n1p4 Jul 2 08:58:15.731272 extend-filesystems[1985]: Found nvme0n1p6 Jul 2 08:58:15.731272 extend-filesystems[1985]: Found nvme0n1p7 Jul 2 08:58:15.731272 extend-filesystems[1985]: Found nvme0n1p9 Jul 2 08:58:15.731272 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Jul 2 08:58:15.701800 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 2 08:58:15.717486 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: proto: precision = 0.096 usec (-23) Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: basedate set to 2024-06-19 Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: gps base set to 2024-06-23 (week 2320) Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: Listen normally on 3 eth0 172.31.30.27:123 Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: Listen normally on 4 lo [::1]:123 Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: bind(21) AF_INET6 fe80::413:4ff:fe5c:6ced%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: unable to create socket on eth0 (5) for fe80::413:4ff:fe5c:6ced%2#123 Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: failed to init interface for address fe80::413:4ff:fe5c:6ced%2 Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:58:15.814016 ntpd[1987]: 2 Jul 08:58:15 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:58:15.753800 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 2 08:58:15.717508 ntpd[1987]: ---------------------------------------------------- Jul 2 08:58:15.814919 update_engine[1995]: I0702 08:58:15.803313 1995 main.cc:92] Flatcar Update Engine starting Jul 2 08:58:15.814919 update_engine[1995]: I0702 08:58:15.814865 1995 update_check_scheduler.cc:74] Next update check in 10m20s Jul 2 08:58:15.775346 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:58:15.717526 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:58:15.779982 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 08:58:15.845217 tar[2010]: linux-arm64/helm Jul 2 08:58:15.847598 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Jul 2 08:58:15.869612 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 08:58:15.717544 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:58:15.826094 systemd[1]: Started update-engine.service - Update Engine. Jul 2 08:58:15.869970 jq[2017]: true Jul 2 08:58:15.883808 extend-filesystems[2032]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 08:58:15.717563 ntpd[1987]: corporation. Support and training for ntp-4 are Jul 2 08:58:15.831845 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 08:58:15.717581 ntpd[1987]: available at https://www.nwtime.org/support Jul 2 08:58:15.841830 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 2 08:58:15.717599 ntpd[1987]: ---------------------------------------------------- Jul 2 08:58:15.743089 ntpd[1987]: proto: precision = 0.096 usec (-23) Jul 2 08:58:15.746157 ntpd[1987]: basedate set to 2024-06-19 Jul 2 08:58:15.746189 ntpd[1987]: gps base set to 2024-06-23 (week 2320) Jul 2 08:58:15.770080 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:58:15.770156 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:58:15.770468 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:58:15.770540 ntpd[1987]: Listen normally on 3 eth0 172.31.30.27:123 Jul 2 08:58:15.770607 ntpd[1987]: Listen normally on 4 lo [::1]:123 Jul 2 08:58:15.770679 ntpd[1987]: bind(21) AF_INET6 fe80::413:4ff:fe5c:6ced%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:58:15.770717 ntpd[1987]: unable to create socket on eth0 (5) for fe80::413:4ff:fe5c:6ced%2#123 Jul 2 08:58:15.770745 ntpd[1987]: failed to init interface for address fe80::413:4ff:fe5c:6ced%2 Jul 2 08:58:15.770804 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Jul 2 08:58:15.807253 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:58:15.807306 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:58:15.944345 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 08:58:15.983994 extend-filesystems[2032]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 08:58:15.983994 extend-filesystems[2032]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 08:58:15.983994 extend-filesystems[2032]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 08:58:16.005960 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Jul 2 08:58:15.995102 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:58:15.995475 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 2 08:58:16.102765 coreos-metadata[1982]: Jul 02 08:58:16.102 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.106 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.107 INFO Fetch successful Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.107 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.108 INFO Fetch successful Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.108 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.109 INFO Fetch successful Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.109 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.111 INFO Fetch successful Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.111 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.112 INFO Fetch failed with 404: resource not found Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.112 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.114 INFO Fetch successful Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.114 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.115 INFO Fetch successful Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.116 INFO Fetch successful Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.116 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.117 INFO Fetch successful Jul 2 08:58:16.125953 coreos-metadata[1982]: Jul 02 08:58:16.118 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 08:58:16.122129 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 2 08:58:16.127047 bash[2077]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:58:16.128533 coreos-metadata[1982]: Jul 02 08:58:16.127 INFO Fetch successful Jul 2 08:58:16.127505 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 08:58:16.166775 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1870) Jul 2 08:58:16.167818 systemd[1]: Starting sshkeys.service... Jul 2 08:58:16.202267 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 08:58:16.202313 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 08:58:16.212584 systemd-logind[1993]: New seat seat0. Jul 2 08:58:16.220285 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 08:58:16.229947 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Jul 2 08:58:16.239292 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 08:58:16.368332 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 08:58:16.374074 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 08:58:16.415663 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 08:58:16.418628 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 08:58:16.427958 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2020 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 08:58:16.433752 systemd-networkd[1850]: eth0: Gained IPv6LL Jul 2 08:58:16.459585 systemd[1]: Starting polkit.service - Authorization Manager... Jul 2 08:58:16.468027 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 08:58:16.475841 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 08:58:16.487455 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 2 08:58:16.498989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:16.509096 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 08:58:16.521630 locksmithd[2029]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:58:16.586470 polkitd[2120]: Started polkitd version 121 Jul 2 08:58:16.619850 containerd[2009]: time="2024-07-02T08:58:16.611929331Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 08:58:16.653728 polkitd[2120]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 08:58:16.653863 polkitd[2120]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 08:58:16.660516 polkitd[2120]: Finished loading, compiling and executing 2 rules Jul 2 08:58:16.671707 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 08:58:16.671999 systemd[1]: Started polkit.service - Authorization Manager. Jul 2 08:58:16.682076 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 08:58:16.685875 polkitd[2120]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 08:58:16.724521 amazon-ssm-agent[2132]: Initializing new seelog logger Jul 2 08:58:16.724521 amazon-ssm-agent[2132]: New Seelog Logger Creation Complete Jul 2 08:58:16.725020 amazon-ssm-agent[2132]: 2024/07/02 08:58:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:16.725020 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:16.729459 amazon-ssm-agent[2132]: 2024/07/02 08:58:16 processing appconfig overrides Jul 2 08:58:16.729459 amazon-ssm-agent[2132]: 2024/07/02 08:58:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:16.729459 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:16.729459 amazon-ssm-agent[2132]: 2024/07/02 08:58:16 processing appconfig overrides Jul 2 08:58:16.729459 amazon-ssm-agent[2132]: 2024/07/02 08:58:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:16.729459 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 2 08:58:16.729459 amazon-ssm-agent[2132]: 2024/07/02 08:58:16 processing appconfig overrides Jul 2 08:58:16.729459 amazon-ssm-agent[2132]: 2024-07-02 08:58:16 INFO Proxy environment variables: Jul 2 08:58:16.733652 amazon-ssm-agent[2132]: 2024/07/02 08:58:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:16.733652 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:16.733652 amazon-ssm-agent[2132]: 2024/07/02 08:58:16 processing appconfig overrides Jul 2 08:58:16.770754 systemd-hostnamed[2020]: Hostname set to (transient) Jul 2 08:58:16.775044 systemd-resolved[1807]: System hostname changed to 'ip-172-31-30-27'. Jul 2 08:58:16.828445 amazon-ssm-agent[2132]: 2024-07-02 08:58:16 INFO https_proxy: Jul 2 08:58:16.872327 coreos-metadata[2099]: Jul 02 08:58:16.872 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:58:16.879380 coreos-metadata[2099]: Jul 02 08:58:16.877 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 08:58:16.889359 coreos-metadata[2099]: Jul 02 08:58:16.882 INFO Fetch successful Jul 2 08:58:16.889359 coreos-metadata[2099]: Jul 02 08:58:16.882 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 08:58:16.889359 coreos-metadata[2099]: Jul 02 08:58:16.888 INFO Fetch successful Jul 2 08:58:16.911805 unknown[2099]: wrote ssh authorized keys file for user: core Jul 2 08:58:16.927101 amazon-ssm-agent[2132]: 2024-07-02 08:58:16 INFO http_proxy: Jul 2 08:58:16.946755 containerd[2009]: time="2024-07-02T08:58:16.946486740Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 08:58:16.946755 containerd[2009]: time="2024-07-02T08:58:16.946589268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:16.964790 containerd[2009]: time="2024-07-02T08:58:16.964708716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:16.964790 containerd[2009]: time="2024-07-02T08:58:16.964776924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:16.965199 containerd[2009]: time="2024-07-02T08:58:16.965145144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:16.965199 containerd[2009]: time="2024-07-02T08:58:16.965191860Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:58:16.965417 containerd[2009]: time="2024-07-02T08:58:16.965375076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:16.976694 containerd[2009]: time="2024-07-02T08:58:16.976599360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:16.976826 containerd[2009]: time="2024-07-02T08:58:16.976669344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:16.976943 containerd[2009]: time="2024-07-02T08:58:16.976901280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:16.977398 containerd[2009]: time="2024-07-02T08:58:16.977345748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:16.977512 containerd[2009]: time="2024-07-02T08:58:16.977401872Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:58:16.977512 containerd[2009]: time="2024-07-02T08:58:16.977491596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:16.977937 containerd[2009]: time="2024-07-02T08:58:16.977787408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:16.977937 containerd[2009]: time="2024-07-02T08:58:16.977826804Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 08:58:16.978243 containerd[2009]: time="2024-07-02T08:58:16.977976564Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:58:16.978243 containerd[2009]: time="2024-07-02T08:58:16.978006192Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:58:16.998517 containerd[2009]: time="2024-07-02T08:58:16.998356236Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:58:16.998517 containerd[2009]: time="2024-07-02T08:58:16.998481564Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 08:58:16.998517 containerd[2009]: time="2024-07-02T08:58:16.998517180Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:58:16.998722 containerd[2009]: time="2024-07-02T08:58:16.998610624Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 08:58:16.998722 containerd[2009]: time="2024-07-02T08:58:16.998649072Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 08:58:16.998722 containerd[2009]: time="2024-07-02T08:58:16.998674992Z" level=info msg="NRI interface is disabled by configuration." Jul 2 08:58:16.998722 containerd[2009]: time="2024-07-02T08:58:16.998703396Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.998946324Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.998996028Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.999029064Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.999061392Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.999095532Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.999132744Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.999164160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.999217308Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.999254544Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.999285672Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.999315096Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:58:16.999455 containerd[2009]: time="2024-07-02T08:58:16.999343428Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:58:17.007598 containerd[2009]: time="2024-07-02T08:58:17.005156109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.014657277Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.014758089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.014793465Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.014856201Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.014982405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.015018045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.015048537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.015077121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.015108693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.015141597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.015170001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.015203097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.015539 containerd[2009]: time="2024-07-02T08:58:17.015237189Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:58:17.016262 containerd[2009]: time="2024-07-02T08:58:17.015580161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.016262 containerd[2009]: time="2024-07-02T08:58:17.015627525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.016262 containerd[2009]: time="2024-07-02T08:58:17.015659805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.016262 containerd[2009]: time="2024-07-02T08:58:17.015693909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.016262 containerd[2009]: time="2024-07-02T08:58:17.015728433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.016262 containerd[2009]: time="2024-07-02T08:58:17.015763569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.016262 containerd[2009]: time="2024-07-02T08:58:17.015794097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:58:17.016262 containerd[2009]: time="2024-07-02T08:58:17.015820497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 08:58:17.016639 containerd[2009]: time="2024-07-02T08:58:17.016284861Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:58:17.016639 containerd[2009]: time="2024-07-02T08:58:17.016390257Z" level=info msg="Connect containerd service" Jul 2 08:58:17.028549 containerd[2009]: time="2024-07-02T08:58:17.026738073Z" level=info msg="using legacy CRI server" Jul 2 08:58:17.028549 containerd[2009]: time="2024-07-02T08:58:17.026780529Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 08:58:17.028549 containerd[2009]: time="2024-07-02T08:58:17.026961465Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:58:17.027619 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 08:58:17.028842 update-ssh-keys[2201]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:58:17.037154 systemd[1]: Finished sshkeys.service. 
Jul 2 08:58:17.044012 containerd[2009]: time="2024-07-02T08:58:17.042706473Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:58:17.044012 containerd[2009]: time="2024-07-02T08:58:17.042811809Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:58:17.044012 containerd[2009]: time="2024-07-02T08:58:17.042857181Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 08:58:17.044012 containerd[2009]: time="2024-07-02T08:58:17.042883461Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:58:17.044012 containerd[2009]: time="2024-07-02T08:58:17.042914577Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 08:58:17.044282 amazon-ssm-agent[2132]: 2024-07-02 08:58:16 INFO no_proxy: Jul 2 08:58:17.052446 containerd[2009]: time="2024-07-02T08:58:17.050204997Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:58:17.052875 containerd[2009]: time="2024-07-02T08:58:17.052796973Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 08:58:17.055285 containerd[2009]: time="2024-07-02T08:58:17.055097325Z" level=info msg="Start subscribing containerd event" Jul 2 08:58:17.055285 containerd[2009]: time="2024-07-02T08:58:17.055238469Z" level=info msg="Start recovering state" Jul 2 08:58:17.055520 containerd[2009]: time="2024-07-02T08:58:17.055377801Z" level=info msg="Start event monitor" Jul 2 08:58:17.059840 containerd[2009]: time="2024-07-02T08:58:17.055412697Z" level=info msg="Start snapshots syncer" Jul 2 08:58:17.060079 containerd[2009]: time="2024-07-02T08:58:17.060036873Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:58:17.063692 containerd[2009]: time="2024-07-02T08:58:17.063620541Z" level=info msg="Start streaming server" Jul 2 08:58:17.064732 containerd[2009]: time="2024-07-02T08:58:17.064403193Z" level=info msg="containerd successfully booted in 0.470342s" Jul 2 08:58:17.064583 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 08:58:17.147447 amazon-ssm-agent[2132]: 2024-07-02 08:58:16 INFO Checking if agent identity type OnPrem can be assumed Jul 2 08:58:17.244504 amazon-ssm-agent[2132]: 2024-07-02 08:58:16 INFO Checking if agent identity type EC2 can be assumed Jul 2 08:58:17.345436 amazon-ssm-agent[2132]: 2024-07-02 08:58:17 INFO Agent will take identity from EC2 Jul 2 08:58:17.444999 amazon-ssm-agent[2132]: 2024-07-02 08:58:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:58:17.453405 sshd_keygen[2028]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:58:17.544712 amazon-ssm-agent[2132]: 2024-07-02 08:58:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:58:17.548638 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 08:58:17.567970 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jul 2 08:58:17.585928 systemd[1]: Started sshd@0-172.31.30.27:22-147.75.109.163:41840.service - OpenSSH per-connection server daemon (147.75.109.163:41840). Jul 2 08:58:17.600974 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:58:17.602555 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 08:58:17.614478 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 08:58:17.645131 amazon-ssm-agent[2132]: 2024-07-02 08:58:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:58:17.690655 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 08:58:17.707031 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 08:58:17.719013 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 08:58:17.721909 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 08:58:17.746104 amazon-ssm-agent[2132]: 2024-07-02 08:58:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 2 08:58:17.847533 amazon-ssm-agent[2132]: 2024-07-02 08:58:17 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 2 08:58:17.873998 sshd[2215]: Accepted publickey for core from 147.75.109.163 port 41840 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:17.877228 sshd[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:17.904403 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 08:58:17.918238 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 08:58:17.929163 systemd-logind[1993]: New session 1 of user core. Jul 2 08:58:17.947170 amazon-ssm-agent[2132]: 2024-07-02 08:58:17 INFO [amazon-ssm-agent] Starting Core Agent Jul 2 08:58:17.956034 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 08:58:17.974808 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 08:58:18.000004 (systemd)[2227]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:18.046570 amazon-ssm-agent[2132]: 2024-07-02 08:58:17 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 2 08:58:18.151690 amazon-ssm-agent[2132]: 2024-07-02 08:58:17 INFO [Registrar] Starting registrar module Jul 2 08:58:18.245487 tar[2010]: linux-arm64/LICENSE Jul 2 08:58:18.248939 tar[2010]: linux-arm64/README.md Jul 2 08:58:18.255785 amazon-ssm-agent[2132]: 2024-07-02 08:58:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 2 08:58:18.279786 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 08:58:18.323728 systemd[2227]: Queued start job for default target default.target. Jul 2 08:58:18.334353 systemd[2227]: Created slice app.slice - User Application Slice. Jul 2 08:58:18.334450 systemd[2227]: Reached target paths.target - Paths. Jul 2 08:58:18.334489 systemd[2227]: Reached target timers.target - Timers. Jul 2 08:58:18.346735 systemd[2227]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 08:58:18.375723 systemd[2227]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 08:58:18.375970 systemd[2227]: Reached target sockets.target - Sockets. Jul 2 08:58:18.376004 systemd[2227]: Reached target basic.target - Basic System. Jul 2 08:58:18.376082 systemd[2227]: Reached target default.target - Main User Target. Jul 2 08:58:18.376143 systemd[2227]: Startup finished in 357ms. 
Jul 2 08:58:18.377030 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 08:58:18.387962 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 08:58:18.418407 amazon-ssm-agent[2132]: 2024-07-02 08:58:18 INFO [EC2Identity] EC2 registration was successful. Jul 2 08:58:18.439841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:18.443097 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 08:58:18.445832 systemd[1]: Startup finished in 1.153s (kernel) + 8.545s (initrd) + 8.153s (userspace) = 17.852s. Jul 2 08:58:18.455158 amazon-ssm-agent[2132]: 2024-07-02 08:58:18 INFO [CredentialRefresher] credentialRefresher has started Jul 2 08:58:18.457768 amazon-ssm-agent[2132]: 2024-07-02 08:58:18 INFO [CredentialRefresher] Starting credentials refresher loop Jul 2 08:58:18.457768 amazon-ssm-agent[2132]: 2024-07-02 08:58:18 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 2 08:58:18.467040 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:58:18.519795 amazon-ssm-agent[2132]: 2024-07-02 08:58:18 INFO [CredentialRefresher] Next credential rotation will be in 31.016609918966665 minutes Jul 2 08:58:18.554596 systemd[1]: Started sshd@1-172.31.30.27:22-147.75.109.163:41848.service - OpenSSH per-connection server daemon (147.75.109.163:41848). Jul 2 08:58:18.718145 ntpd[1987]: Listen normally on 6 eth0 [fe80::413:4ff:fe5c:6ced%2]:123 Jul 2 08:58:18.719157 ntpd[1987]: 2 Jul 08:58:18 ntpd[1987]: Listen normally on 6 eth0 [fe80::413:4ff:fe5c:6ced%2]:123 Jul 2 08:58:18.740611 sshd[2252]: Accepted publickey for core from 147.75.109.163 port 41848 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:18.742844 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:18.750594 systemd-logind[1993]: New session 2 of user core. Jul 2 08:58:18.757763 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 08:58:18.888704 sshd[2252]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:18.895347 systemd[1]: sshd@1-172.31.30.27:22-147.75.109.163:41848.service: Deactivated successfully. Jul 2 08:58:18.901468 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 08:58:18.903807 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Jul 2 08:58:18.907159 systemd-logind[1993]: Removed session 2. Jul 2 08:58:18.928850 systemd[1]: Started sshd@2-172.31.30.27:22-147.75.109.163:41850.service - OpenSSH per-connection server daemon (147.75.109.163:41850). Jul 2 08:58:19.103728 sshd[2263]: Accepted publickey for core from 147.75.109.163 port 41850 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:19.106857 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:19.117523 systemd-logind[1993]: New session 3 of user core. Jul 2 08:58:19.123741 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 08:58:19.248784 sshd[2263]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:19.254475 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 08:58:19.258366 systemd[1]: sshd@2-172.31.30.27:22-147.75.109.163:41850.service: Deactivated successfully. Jul 2 08:58:19.258697 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Jul 2 08:58:19.265105 systemd-logind[1993]: Removed session 3. 
Jul 2 08:58:19.287872 systemd[1]: Started sshd@3-172.31.30.27:22-147.75.109.163:41856.service - OpenSSH per-connection server daemon (147.75.109.163:41856). Jul 2 08:58:19.310781 kubelet[2245]: E0702 08:58:19.310652 2245 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:58:19.315838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:58:19.316203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:58:19.317850 systemd[1]: kubelet.service: Consumed 1.342s CPU time. Jul 2 08:58:19.468063 sshd[2272]: Accepted publickey for core from 147.75.109.163 port 41856 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:19.471252 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:19.480756 systemd-logind[1993]: New session 4 of user core. Jul 2 08:58:19.484745 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 08:58:19.485624 amazon-ssm-agent[2132]: 2024-07-02 08:58:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 2 08:58:19.586549 amazon-ssm-agent[2132]: 2024-07-02 08:58:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2276) started Jul 2 08:58:19.626690 sshd[2272]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:19.632208 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 08:58:19.633598 systemd[1]: sshd@3-172.31.30.27:22-147.75.109.163:41856.service: Deactivated successfully. Jul 2 08:58:19.641785 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:58:19.644036 systemd-logind[1993]: Removed session 4. Jul 2 08:58:19.665581 systemd[1]: Started sshd@4-172.31.30.27:22-147.75.109.163:41864.service - OpenSSH per-connection server daemon (147.75.109.163:41864). Jul 2 08:58:19.687004 amazon-ssm-agent[2132]: 2024-07-02 08:58:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 2 08:58:19.853889 sshd[2287]: Accepted publickey for core from 147.75.109.163 port 41864 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:19.856985 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:19.866517 systemd-logind[1993]: New session 5 of user core. Jul 2 08:58:19.872697 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 08:58:20.007853 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 08:58:20.008444 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:20.026494 sudo[2293]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:20.049984 sshd[2287]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:20.055120 systemd[1]: sshd@4-172.31.30.27:22-147.75.109.163:41864.service: Deactivated successfully. Jul 2 08:58:20.058969 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:58:20.063746 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:58:20.065676 systemd-logind[1993]: Removed session 5. 
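The kubelet failure above (and at each scheduled restart later in the log) simply means /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-based setups that file is written during kubeadm init or kubeadm join, so the unit keeps exiting and being rescheduled until the node is bootstrapped. A trivial precondition check, as a hypothetical diagnostic one could run on the node (not something executed during this boot):

    from pathlib import Path

    # The file the kubelet is complaining about; kubeadm writes it at init/join time.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.is_file():
        print(f"{cfg} present ({cfg.stat().st_size} bytes); kubelet should start")
    else:
        print(f"{cfg} missing; bootstrap the node (e.g. kubeadm init/join) first")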
Jul 2 08:58:20.085925 systemd[1]: Started sshd@5-172.31.30.27:22-147.75.109.163:41880.service - OpenSSH per-connection server daemon (147.75.109.163:41880). Jul 2 08:58:20.264334 sshd[2298]: Accepted publickey for core from 147.75.109.163 port 41880 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:20.266924 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:20.274301 systemd-logind[1993]: New session 6 of user core. Jul 2 08:58:20.282695 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 08:58:20.386042 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 08:58:20.387074 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:20.393194 sudo[2302]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:20.403213 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 08:58:20.403773 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:20.428923 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 08:58:20.432827 auditctl[2305]: No rules Jul 2 08:58:20.433395 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 08:58:20.433839 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 08:58:20.442141 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:58:20.497266 augenrules[2323]: No rules Jul 2 08:58:20.499618 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:58:20.502116 sudo[2301]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:20.525047 sshd[2298]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:20.532467 systemd[1]: sshd@5-172.31.30.27:22-147.75.109.163:41880.service: Deactivated successfully. Jul 2 08:58:20.535399 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:58:20.536877 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:58:20.538906 systemd-logind[1993]: Removed session 6. Jul 2 08:58:20.563961 systemd[1]: Started sshd@6-172.31.30.27:22-147.75.109.163:41882.service - OpenSSH per-connection server daemon (147.75.109.163:41882). Jul 2 08:58:20.741763 sshd[2331]: Accepted publickey for core from 147.75.109.163 port 41882 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:20.744256 sshd[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:20.752677 systemd-logind[1993]: New session 7 of user core. Jul 2 08:58:20.759702 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 08:58:20.861766 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:58:20.862277 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:21.042930 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 08:58:21.056021 (dockerd)[2344]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 08:58:21.523981 dockerd[2344]: time="2024-07-02T08:58:21.523877091Z" level=info msg="Starting up" Jul 2 08:58:22.148066 dockerd[2344]: time="2024-07-02T08:58:22.147993854Z" level=info msg="Loading containers: start." 
Jul 2 08:58:22.334488 kernel: Initializing XFRM netlink socket Jul 2 08:58:22.389214 (udev-worker)[2358]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:58:22.483615 systemd-networkd[1850]: docker0: Link UP Jul 2 08:58:22.503078 dockerd[2344]: time="2024-07-02T08:58:22.503009572Z" level=info msg="Loading containers: done." Jul 2 08:58:22.631152 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3542081600-merged.mount: Deactivated successfully. Jul 2 08:58:22.655246 dockerd[2344]: time="2024-07-02T08:58:22.654299297Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 08:58:22.655246 dockerd[2344]: time="2024-07-02T08:58:22.654648689Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 08:58:22.655246 dockerd[2344]: time="2024-07-02T08:58:22.654854957Z" level=info msg="Daemon has completed initialization" Jul 2 08:58:22.717370 dockerd[2344]: time="2024-07-02T08:58:22.717289853Z" level=info msg="API listen on /run/docker.sock" Jul 2 08:58:22.718141 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 08:58:22.941759 systemd-resolved[1807]: Clock change detected. Flushing caches. Jul 2 08:58:23.945437 containerd[2009]: time="2024-07-02T08:58:23.944975520Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 08:58:24.660843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684440032.mount: Deactivated successfully. Jul 2 08:58:26.525535 containerd[2009]: time="2024-07-02T08:58:26.525377137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:26.531542 containerd[2009]: time="2024-07-02T08:58:26.530660521Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=32256347" Jul 2 08:58:26.531542 containerd[2009]: time="2024-07-02T08:58:26.530907829Z" level=info msg="ImageCreate event name:\"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:26.541322 containerd[2009]: time="2024-07-02T08:58:26.541262593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:26.543674 containerd[2009]: time="2024-07-02T08:58:26.543597445Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"32253147\" in 2.598562609s" Jul 2 08:58:26.543674 containerd[2009]: time="2024-07-02T08:58:26.543667489Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\"" Jul 2 08:58:26.583583 containerd[2009]: time="2024-07-02T08:58:26.583527062Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 08:58:28.483414 containerd[2009]: time="2024-07-02T08:58:28.483337791Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:28.485172 containerd[2009]: time="2024-07-02T08:58:28.485047863Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=29228084" Jul 2 08:58:28.486793 containerd[2009]: time="2024-07-02T08:58:28.486703143Z" level=info msg="ImageCreate event name:\"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:28.492446 containerd[2009]: time="2024-07-02T08:58:28.492388011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:28.495365 containerd[2009]: time="2024-07-02T08:58:28.494793051Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"30685210\" in 1.911202209s" Jul 2 08:58:28.495365 containerd[2009]: time="2024-07-02T08:58:28.494856603Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\"" Jul 2 08:58:28.538597 containerd[2009]: time="2024-07-02T08:58:28.538252659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 08:58:29.620295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:58:29.631903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 2 08:58:29.786289 containerd[2009]: time="2024-07-02T08:58:29.784602773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:29.793137 containerd[2009]: time="2024-07-02T08:58:29.793045410Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=15578348" Jul 2 08:58:29.796529 containerd[2009]: time="2024-07-02T08:58:29.794749098Z" level=info msg="ImageCreate event name:\"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:29.807765 containerd[2009]: time="2024-07-02T08:58:29.807707106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:29.812272 containerd[2009]: time="2024-07-02T08:58:29.812211294Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"17035492\" in 1.273899355s" Jul 2 08:58:29.812474 containerd[2009]: time="2024-07-02T08:58:29.812445234Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\"" Jul 2 08:58:29.861768 containerd[2009]: time="2024-07-02T08:58:29.861720510Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 08:58:30.017835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:30.020868 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:58:30.103334 kubelet[2562]: E0702 08:58:30.103182 2562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:58:30.111616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:58:30.112040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:58:31.245514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515789189.mount: Deactivated successfully. 
Jul 2 08:58:31.758997 containerd[2009]: time="2024-07-02T08:58:31.758940619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:31.761003 containerd[2009]: time="2024-07-02T08:58:31.760937347Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=25052710" Jul 2 08:58:31.762402 containerd[2009]: time="2024-07-02T08:58:31.762311407Z" level=info msg="ImageCreate event name:\"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:31.766509 containerd[2009]: time="2024-07-02T08:58:31.766392379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:31.768135 containerd[2009]: time="2024-07-02T08:58:31.767916967Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"25051729\" in 1.905965937s" Jul 2 08:58:31.768135 containerd[2009]: time="2024-07-02T08:58:31.767976571Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jul 2 08:58:31.808607 containerd[2009]: time="2024-07-02T08:58:31.808542104Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 08:58:32.637369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025756095.mount: Deactivated successfully. 
Jul 2 08:58:33.969967 containerd[2009]: time="2024-07-02T08:58:33.969884674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:33.972226 containerd[2009]: time="2024-07-02T08:58:33.972148102Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jul 2 08:58:33.973271 containerd[2009]: time="2024-07-02T08:58:33.973182286Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:33.979204 containerd[2009]: time="2024-07-02T08:58:33.979096990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:33.981741 containerd[2009]: time="2024-07-02T08:58:33.981555814Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.172950098s" Jul 2 08:58:33.981741 containerd[2009]: time="2024-07-02T08:58:33.981614278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 08:58:34.020401 containerd[2009]: time="2024-07-02T08:58:34.020278075Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 08:58:34.534440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700879942.mount: Deactivated successfully. 
Jul 2 08:58:34.542687 containerd[2009]: time="2024-07-02T08:58:34.542611953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:34.544229 containerd[2009]: time="2024-07-02T08:58:34.544178241Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jul 2 08:58:34.545774 containerd[2009]: time="2024-07-02T08:58:34.545686941Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:34.550515 containerd[2009]: time="2024-07-02T08:58:34.550442157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:34.552832 containerd[2009]: time="2024-07-02T08:58:34.552660225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 532.032746ms" Jul 2 08:58:34.552832 containerd[2009]: time="2024-07-02T08:58:34.552718173Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 08:58:34.596339 containerd[2009]: time="2024-07-02T08:58:34.596268249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 08:58:35.183934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990472656.mount: Deactivated successfully. Jul 2 08:58:38.334021 containerd[2009]: time="2024-07-02T08:58:38.333958872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:38.336909 containerd[2009]: time="2024-07-02T08:58:38.336835356Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jul 2 08:58:38.337835 containerd[2009]: time="2024-07-02T08:58:38.337748244Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:38.344052 containerd[2009]: time="2024-07-02T08:58:38.343962048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:38.347162 containerd[2009]: time="2024-07-02T08:58:38.346511412Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.750176887s" Jul 2 08:58:38.347162 containerd[2009]: time="2024-07-02T08:58:38.346577592Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 08:58:40.120071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
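The etcd:3.5.10-0 pull above moves about 65 MB in roughly 3.75 s. As a rough throughput estimate from the figures in the log (assuming the reported "bytes read" approximates the actual transfer size):

    # Figures taken from the etcd:3.5.10-0 pull messages above.
    bytes_read = 65_200_786      # "active requests=0, bytes read=65200786"
    duration_s = 3.750176887     # "... in 3.750176887s"

    rate = bytes_read / duration_s
    print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")
    # -> roughly 17.4 MB/s (16.6 MiB/s)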
Jul 2 08:58:40.129881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:40.660913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:40.669965 (kubelet)[2754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:58:40.778357 kubelet[2754]: E0702 08:58:40.778267 2754 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:58:40.783244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:58:40.784507 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:58:44.393552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:44.406978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:44.447095 systemd[1]: Reloading requested from client PID 2769 ('systemctl') (unit session-7.scope)... Jul 2 08:58:44.447121 systemd[1]: Reloading... Jul 2 08:58:44.650581 zram_generator::config[2811]: No configuration found. Jul 2 08:58:44.891267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:58:45.061848 systemd[1]: Reloading finished in 613 ms. Jul 2 08:58:45.145994 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 08:58:45.146623 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 08:58:45.147142 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:45.154100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:45.643373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:45.668247 (kubelet)[2871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:58:45.748568 kubelet[2871]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:58:45.748568 kubelet[2871]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:58:45.748568 kubelet[2871]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:58:45.750365 kubelet[2871]: I0702 08:58:45.750259 2871 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:58:47.028614 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 2 08:58:47.433534 kubelet[2871]: I0702 08:58:47.432923 2871 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 08:58:47.433534 kubelet[2871]: I0702 08:58:47.432970 2871 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:58:47.433534 kubelet[2871]: I0702 08:58:47.433291 2871 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 08:58:47.462070 kubelet[2871]: I0702 08:58:47.461846 2871 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:58:47.462469 kubelet[2871]: E0702 08:58:47.462444 2871 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:47.474745 kubelet[2871]: I0702 08:58:47.474701 2871 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 08:58:47.477090 kubelet[2871]: I0702 08:58:47.477032 2871 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:58:47.477425 kubelet[2871]: I0702 08:58:47.477364 2871 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:58:47.477633 kubelet[2871]: I0702 08:58:47.477431 2871 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:58:47.477633 kubelet[2871]: I0702 08:58:47.477461 2871 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:58:47.477742 kubelet[2871]: I0702 08:58:47.477660 2871 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:58:47.482198 kubelet[2871]: I0702 08:58:47.482147 2871 kubelet.go:396] "Attempting to sync node with API server" Jul 2 08:58:47.482198 kubelet[2871]: I0702 08:58:47.482202 2871 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:58:47.483855 kubelet[2871]: I0702 08:58:47.482246 2871 kubelet.go:312] "Adding apiserver pod source" Jul 2 
08:58:47.483855 kubelet[2871]: I0702 08:58:47.482278 2871 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:58:47.487150 kubelet[2871]: W0702 08:58:47.487057 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:47.487150 kubelet[2871]: E0702 08:58:47.487159 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:47.489703 kubelet[2871]: I0702 08:58:47.489661 2871 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:58:47.490427 kubelet[2871]: I0702 08:58:47.490393 2871 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:58:47.491719 kubelet[2871]: W0702 08:58:47.491680 2871 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 08:58:47.494653 kubelet[2871]: I0702 08:58:47.494613 2871 server.go:1256] "Started kubelet" Jul 2 08:58:47.502278 kubelet[2871]: I0702 08:58:47.502232 2871 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:58:47.503919 kubelet[2871]: I0702 08:58:47.503878 2871 server.go:461] "Adding debug handlers to kubelet server" Jul 2 08:58:47.506542 kubelet[2871]: I0702 08:58:47.506449 2871 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:58:47.507144 kubelet[2871]: I0702 08:58:47.507106 2871 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:58:47.510136 kubelet[2871]: E0702 08:58:47.510082 2871 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.27:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.27:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-27.17de59b1b5b29269 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-27,UID:ip-172-31-30-27,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-27,},FirstTimestamp:2024-07-02 08:58:47.494570601 +0000 UTC m=+1.819475710,LastTimestamp:2024-07-02 08:58:47.494570601 +0000 UTC m=+1.819475710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-27,}" Jul 2 08:58:47.510832 kubelet[2871]: I0702 08:58:47.510581 2871 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:58:47.512223 kubelet[2871]: W0702 08:58:47.512029 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-27&limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:47.512223 kubelet[2871]: E0702 08:58:47.512123 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://172.31.30.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-27&limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:47.520327 kubelet[2871]: E0702 08:58:47.520168 2871 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:58:47.521375 kubelet[2871]: I0702 08:58:47.520675 2871 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:58:47.521375 kubelet[2871]: I0702 08:58:47.521344 2871 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:58:47.523434 kubelet[2871]: I0702 08:58:47.522373 2871 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:58:47.523434 kubelet[2871]: W0702 08:58:47.523023 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:47.523434 kubelet[2871]: E0702 08:58:47.523095 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:47.523434 kubelet[2871]: E0702 08:58:47.523230 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-27?timeout=10s\": dial tcp 172.31.30.27:6443: connect: connection refused" interval="200ms" Jul 2 08:58:47.531109 kubelet[2871]: I0702 08:58:47.529692 2871 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:58:47.531109 kubelet[2871]: I0702 08:58:47.529724 2871 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:58:47.531109 kubelet[2871]: I0702 08:58:47.529872 2871 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:58:47.554458 kubelet[2871]: I0702 08:58:47.554374 2871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:58:47.559649 kubelet[2871]: I0702 08:58:47.559598 2871 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:58:47.559649 kubelet[2871]: I0702 08:58:47.559642 2871 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:58:47.559834 kubelet[2871]: I0702 08:58:47.559694 2871 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 08:58:47.559834 kubelet[2871]: E0702 08:58:47.559789 2871 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:58:47.562907 kubelet[2871]: W0702 08:58:47.562829 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:47.563136 kubelet[2871]: E0702 08:58:47.563113 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:47.567667 kubelet[2871]: I0702 08:58:47.567616 2871 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:58:47.567667 kubelet[2871]: I0702 08:58:47.567661 2871 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:58:47.567862 kubelet[2871]: I0702 08:58:47.567690 2871 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:58:47.581597 kubelet[2871]: I0702 08:58:47.581536 2871 policy_none.go:49] "None policy: Start" Jul 2 08:58:47.582980 kubelet[2871]: I0702 08:58:47.582857 2871 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:58:47.582980 kubelet[2871]: I0702 08:58:47.582925 2871 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:58:47.593822 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 08:58:47.606009 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 08:58:47.612916 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
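The HardEvictionThresholds and CgroupDriver values parsed into the node config above match the kubelet's stock defaults. Purely as an illustration (not a file to install, and assuming the kubelet.config.k8s.io/v1beta1 schema in use here), the same settings expressed as a KubeletConfiguration fragment would look roughly like this:

    cat <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
    EOF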
Jul 2 08:58:47.624540 kubelet[2871]: I0702 08:58:47.624295 2871 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:58:47.625240 kubelet[2871]: I0702 08:58:47.625209 2871 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-27" Jul 2 08:58:47.626161 kubelet[2871]: I0702 08:58:47.626134 2871 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:58:47.627289 kubelet[2871]: E0702 08:58:47.626984 2871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.27:6443/api/v1/nodes\": dial tcp 172.31.30.27:6443: connect: connection refused" node="ip-172-31-30-27" Jul 2 08:58:47.630193 kubelet[2871]: E0702 08:58:47.630055 2871 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-27\" not found" Jul 2 08:58:47.660944 kubelet[2871]: I0702 08:58:47.660883 2871 topology_manager.go:215] "Topology Admit Handler" podUID="d7a23e7a14ac95eb2a20f87cdbe8d7eb" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-27" Jul 2 08:58:47.663042 kubelet[2871]: I0702 08:58:47.663000 2871 topology_manager.go:215] "Topology Admit Handler" podUID="278a5b58bc069d1f23770c9a7a1ad82c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:47.665343 kubelet[2871]: I0702 08:58:47.665020 2871 topology_manager.go:215] "Topology Admit Handler" podUID="bca169b650dee8e27144ec2c93d6e0d9" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-27" Jul 2 08:58:47.678728 systemd[1]: Created slice kubepods-burstable-podd7a23e7a14ac95eb2a20f87cdbe8d7eb.slice - libcontainer container kubepods-burstable-podd7a23e7a14ac95eb2a20f87cdbe8d7eb.slice. Jul 2 08:58:47.707548 systemd[1]: Created slice kubepods-burstable-pod278a5b58bc069d1f23770c9a7a1ad82c.slice - libcontainer container kubepods-burstable-pod278a5b58bc069d1f23770c9a7a1ad82c.slice. 
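The three Topology Admit Handler entries above are the control-plane static pods, picked up from the static pod path the kubelet logged earlier (/etc/kubernetes/manifests); the connection-refused errors persist until the kube-apiserver pod defined there is actually running. A quick, illustrative check on the node (the manifest file names follow the usual kubeadm convention and are an assumption here):

    ls /etc/kubernetes/manifests/
    grep -H 'image:' /etc/kubernetes/manifests/*.yaml   # which control-plane images will be started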
Jul 2 08:58:47.724689 kubelet[2871]: I0702 08:58:47.724520 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/278a5b58bc069d1f23770c9a7a1ad82c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-27\" (UID: \"278a5b58bc069d1f23770c9a7a1ad82c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:47.724689 kubelet[2871]: I0702 08:58:47.724590 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7a23e7a14ac95eb2a20f87cdbe8d7eb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-27\" (UID: \"d7a23e7a14ac95eb2a20f87cdbe8d7eb\") " pod="kube-system/kube-apiserver-ip-172-31-30-27" Jul 2 08:58:47.724689 kubelet[2871]: I0702 08:58:47.724638 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/278a5b58bc069d1f23770c9a7a1ad82c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-27\" (UID: \"278a5b58bc069d1f23770c9a7a1ad82c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:47.724689 kubelet[2871]: I0702 08:58:47.724696 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7a23e7a14ac95eb2a20f87cdbe8d7eb-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-27\" (UID: \"d7a23e7a14ac95eb2a20f87cdbe8d7eb\") " pod="kube-system/kube-apiserver-ip-172-31-30-27" Jul 2 08:58:47.724982 kubelet[2871]: I0702 08:58:47.724741 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/278a5b58bc069d1f23770c9a7a1ad82c-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-27\" (UID: \"278a5b58bc069d1f23770c9a7a1ad82c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:47.724982 kubelet[2871]: I0702 08:58:47.724783 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/278a5b58bc069d1f23770c9a7a1ad82c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-27\" (UID: \"278a5b58bc069d1f23770c9a7a1ad82c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:47.724982 kubelet[2871]: I0702 08:58:47.724833 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/278a5b58bc069d1f23770c9a7a1ad82c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-27\" (UID: \"278a5b58bc069d1f23770c9a7a1ad82c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:47.724982 kubelet[2871]: I0702 08:58:47.724877 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bca169b650dee8e27144ec2c93d6e0d9-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-27\" (UID: \"bca169b650dee8e27144ec2c93d6e0d9\") " pod="kube-system/kube-scheduler-ip-172-31-30-27" Jul 2 08:58:47.724982 kubelet[2871]: I0702 08:58:47.724931 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d7a23e7a14ac95eb2a20f87cdbe8d7eb-ca-certs\") pod \"kube-apiserver-ip-172-31-30-27\" (UID: \"d7a23e7a14ac95eb2a20f87cdbe8d7eb\") " pod="kube-system/kube-apiserver-ip-172-31-30-27" Jul 2 08:58:47.726543 kubelet[2871]: E0702 08:58:47.725589 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-27?timeout=10s\": dial tcp 172.31.30.27:6443: connect: connection refused" interval="400ms" Jul 2 08:58:47.729573 systemd[1]: Created slice kubepods-burstable-podbca169b650dee8e27144ec2c93d6e0d9.slice - libcontainer container kubepods-burstable-podbca169b650dee8e27144ec2c93d6e0d9.slice. Jul 2 08:58:47.829399 kubelet[2871]: I0702 08:58:47.828890 2871 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-27" Jul 2 08:58:47.829399 kubelet[2871]: E0702 08:58:47.829351 2871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.27:6443/api/v1/nodes\": dial tcp 172.31.30.27:6443: connect: connection refused" node="ip-172-31-30-27" Jul 2 08:58:48.001806 containerd[2009]: time="2024-07-02T08:58:48.001642028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-27,Uid:d7a23e7a14ac95eb2a20f87cdbe8d7eb,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:48.024108 containerd[2009]: time="2024-07-02T08:58:48.023893904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-27,Uid:278a5b58bc069d1f23770c9a7a1ad82c,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:48.035234 containerd[2009]: time="2024-07-02T08:58:48.034749116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-27,Uid:bca169b650dee8e27144ec2c93d6e0d9,Namespace:kube-system,Attempt:0,}" Jul 2 08:58:48.127159 kubelet[2871]: E0702 08:58:48.127108 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-27?timeout=10s\": dial tcp 172.31.30.27:6443: connect: connection refused" interval="800ms" Jul 2 08:58:48.231702 kubelet[2871]: I0702 08:58:48.231445 2871 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-27" Jul 2 08:58:48.232069 kubelet[2871]: E0702 08:58:48.231982 2871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.27:6443/api/v1/nodes\": dial tcp 172.31.30.27:6443: connect: connection refused" node="ip-172-31-30-27" Jul 2 08:58:48.600013 kubelet[2871]: W0702 08:58:48.599946 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-27&limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:48.600013 kubelet[2871]: E0702 08:58:48.600016 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-27&limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:48.889740 kubelet[2871]: W0702 08:58:48.889676 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://172.31.30.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:48.889879 kubelet[2871]: E0702 08:58:48.889771 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:48.905309 kubelet[2871]: W0702 08:58:48.905255 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:48.905449 kubelet[2871]: E0702 08:58:48.905365 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:48.913852 kubelet[2871]: W0702 08:58:48.913756 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:48.913852 kubelet[2871]: E0702 08:58:48.913820 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:48.927723 kubelet[2871]: E0702 08:58:48.927675 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-27?timeout=10s\": dial tcp 172.31.30.27:6443: connect: connection refused" interval="1.6s" Jul 2 08:58:48.999088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930956877.mount: Deactivated successfully. 
Jul 2 08:58:49.010105 containerd[2009]: time="2024-07-02T08:58:49.010035837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:58:49.011952 containerd[2009]: time="2024-07-02T08:58:49.011873637Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:58:49.013822 containerd[2009]: time="2024-07-02T08:58:49.013743693Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:58:49.014924 containerd[2009]: time="2024-07-02T08:58:49.014869905Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 2 08:58:49.016663 containerd[2009]: time="2024-07-02T08:58:49.016612317Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:58:49.018451 containerd[2009]: time="2024-07-02T08:58:49.018366837Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:58:49.019428 containerd[2009]: time="2024-07-02T08:58:49.019010997Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:58:49.026633 containerd[2009]: time="2024-07-02T08:58:49.026562801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:58:49.028931 containerd[2009]: time="2024-07-02T08:58:49.028611921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.004563349s" Jul 2 08:58:49.033261 containerd[2009]: time="2024-07-02T08:58:49.033201609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 998.313653ms" Jul 2 08:58:49.035038 kubelet[2871]: I0702 08:58:49.034984 2871 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-27" Jul 2 08:58:49.035792 kubelet[2871]: E0702 08:58:49.035477 2871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.27:6443/api/v1/nodes\": dial tcp 172.31.30.27:6443: connect: connection refused" node="ip-172-31-30-27" Jul 2 08:58:49.047446 containerd[2009]: time="2024-07-02T08:58:49.047368233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.045584569s" Jul 2 08:58:49.345344 containerd[2009]: time="2024-07-02T08:58:49.343934243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:49.345344 containerd[2009]: time="2024-07-02T08:58:49.344038643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:49.345344 containerd[2009]: time="2024-07-02T08:58:49.344105615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:49.345344 containerd[2009]: time="2024-07-02T08:58:49.344140859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:49.349155 containerd[2009]: time="2024-07-02T08:58:49.348662003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:49.349155 containerd[2009]: time="2024-07-02T08:58:49.348765143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:49.349155 containerd[2009]: time="2024-07-02T08:58:49.348807239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:49.349155 containerd[2009]: time="2024-07-02T08:58:49.348841619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:49.356417 containerd[2009]: time="2024-07-02T08:58:49.355860467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:58:49.356417 containerd[2009]: time="2024-07-02T08:58:49.355958183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:49.356417 containerd[2009]: time="2024-07-02T08:58:49.356006243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:58:49.356417 containerd[2009]: time="2024-07-02T08:58:49.356040167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:58:49.390475 systemd[1]: Started cri-containerd-56db0026f7a3cfaed6ddfda52075f5a83d1a4c205b4aabd5a62702c9c5d7c0ae.scope - libcontainer container 56db0026f7a3cfaed6ddfda52075f5a83d1a4c205b4aabd5a62702c9c5d7c0ae. Jul 2 08:58:49.420645 systemd[1]: Started cri-containerd-4b9fb98d651fe162b59cbc038a96cd08944934ac230d4d31c8289d3d0d53cf40.scope - libcontainer container 4b9fb98d651fe162b59cbc038a96cd08944934ac230d4d31c8289d3d0d53cf40. Jul 2 08:58:49.431099 systemd[1]: Started cri-containerd-66780602eed93f8054bc9d452389dcd1bb64110c7a9cf9990c0295312ab2a4ec.scope - libcontainer container 66780602eed93f8054bc9d452389dcd1bb64110c7a9cf9990c0295312ab2a4ec. 
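At this point containerd has pulled the registry.k8s.io/pause:3.8 sandbox image and started one sandbox per control-plane pod (the cri-containerd-*.scope units above). A sketch for inspecting that state with crictl, assuming crictl on this host is configured to talk to the containerd socket:

    crictl pods                    # the three control-plane pod sandboxes
    crictl images | grep pause     # the pinned pause:3.8 image pulled above
    crictl ps -a                   # containers created inside those sandboxes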
Jul 2 08:58:49.521453 containerd[2009]: time="2024-07-02T08:58:49.520903668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-27,Uid:d7a23e7a14ac95eb2a20f87cdbe8d7eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"56db0026f7a3cfaed6ddfda52075f5a83d1a4c205b4aabd5a62702c9c5d7c0ae\"" Jul 2 08:58:49.544127 containerd[2009]: time="2024-07-02T08:58:49.544070172Z" level=info msg="CreateContainer within sandbox \"56db0026f7a3cfaed6ddfda52075f5a83d1a4c205b4aabd5a62702c9c5d7c0ae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:58:49.551419 containerd[2009]: time="2024-07-02T08:58:49.551359548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-27,Uid:278a5b58bc069d1f23770c9a7a1ad82c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b9fb98d651fe162b59cbc038a96cd08944934ac230d4d31c8289d3d0d53cf40\"" Jul 2 08:58:49.559043 containerd[2009]: time="2024-07-02T08:58:49.558978036Z" level=info msg="CreateContainer within sandbox \"4b9fb98d651fe162b59cbc038a96cd08944934ac230d4d31c8289d3d0d53cf40\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:58:49.566141 containerd[2009]: time="2024-07-02T08:58:49.566085228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-27,Uid:bca169b650dee8e27144ec2c93d6e0d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"66780602eed93f8054bc9d452389dcd1bb64110c7a9cf9990c0295312ab2a4ec\"" Jul 2 08:58:49.588797 containerd[2009]: time="2024-07-02T08:58:49.588547416Z" level=info msg="CreateContainer within sandbox \"66780602eed93f8054bc9d452389dcd1bb64110c7a9cf9990c0295312ab2a4ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:58:49.603253 containerd[2009]: time="2024-07-02T08:58:49.602736984Z" level=info msg="CreateContainer within sandbox \"56db0026f7a3cfaed6ddfda52075f5a83d1a4c205b4aabd5a62702c9c5d7c0ae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a7eb792ead57731680436fff42fafb03728613a763711204c028f762b9e83e53\"" Jul 2 08:58:49.604536 containerd[2009]: time="2024-07-02T08:58:49.604268844Z" level=info msg="StartContainer for \"a7eb792ead57731680436fff42fafb03728613a763711204c028f762b9e83e53\"" Jul 2 08:58:49.620437 containerd[2009]: time="2024-07-02T08:58:49.620338152Z" level=info msg="CreateContainer within sandbox \"4b9fb98d651fe162b59cbc038a96cd08944934ac230d4d31c8289d3d0d53cf40\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6\"" Jul 2 08:58:49.621593 containerd[2009]: time="2024-07-02T08:58:49.621088668Z" level=info msg="StartContainer for \"114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6\"" Jul 2 08:58:49.627605 containerd[2009]: time="2024-07-02T08:58:49.627251244Z" level=info msg="CreateContainer within sandbox \"66780602eed93f8054bc9d452389dcd1bb64110c7a9cf9990c0295312ab2a4ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98\"" Jul 2 08:58:49.629120 containerd[2009]: time="2024-07-02T08:58:49.628966464Z" level=info msg="StartContainer for \"5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98\"" Jul 2 08:58:49.639622 kubelet[2871]: E0702 08:58:49.638064 2871 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the 
control plane: cannot create certificate signing request: Post "https://172.31.30.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.27:6443: connect: connection refused Jul 2 08:58:49.678807 systemd[1]: Started cri-containerd-a7eb792ead57731680436fff42fafb03728613a763711204c028f762b9e83e53.scope - libcontainer container a7eb792ead57731680436fff42fafb03728613a763711204c028f762b9e83e53. Jul 2 08:58:49.705832 systemd[1]: Started cri-containerd-114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6.scope - libcontainer container 114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6. Jul 2 08:58:49.714055 systemd[1]: Started cri-containerd-5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98.scope - libcontainer container 5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98. Jul 2 08:58:49.828764 containerd[2009]: time="2024-07-02T08:58:49.828673189Z" level=info msg="StartContainer for \"a7eb792ead57731680436fff42fafb03728613a763711204c028f762b9e83e53\" returns successfully" Jul 2 08:58:49.844818 containerd[2009]: time="2024-07-02T08:58:49.844607677Z" level=info msg="StartContainer for \"5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98\" returns successfully" Jul 2 08:58:49.889875 containerd[2009]: time="2024-07-02T08:58:49.889710493Z" level=info msg="StartContainer for \"114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6\" returns successfully" Jul 2 08:58:50.639396 kubelet[2871]: I0702 08:58:50.638444 2871 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-27" Jul 2 08:58:54.131657 kubelet[2871]: E0702 08:58:54.131589 2871 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-27\" not found" node="ip-172-31-30-27" Jul 2 08:58:54.143193 kubelet[2871]: I0702 08:58:54.142927 2871 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-27" Jul 2 08:58:54.193680 kubelet[2871]: E0702 08:58:54.193351 2871 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-27.17de59b1b5b29269 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-27,UID:ip-172-31-30-27,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-27,},FirstTimestamp:2024-07-02 08:58:47.494570601 +0000 UTC m=+1.819475710,LastTimestamp:2024-07-02 08:58:47.494570601 +0000 UTC m=+1.819475710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-27,}" Jul 2 08:58:54.490749 kubelet[2871]: I0702 08:58:54.490167 2871 apiserver.go:52] "Watching apiserver" Jul 2 08:58:54.523527 kubelet[2871]: I0702 08:58:54.523413 2871 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:58:56.870590 systemd[1]: Reloading requested from client PID 3154 ('systemctl') (unit session-7.scope)... Jul 2 08:58:56.871114 systemd[1]: Reloading... Jul 2 08:58:57.048620 zram_generator::config[3195]: No configuration found. Jul 2 08:58:57.267390 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
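The certificate-signing and node-registration attempts keep failing with connection refused until the kube-apiserver container started above begins listening on 172.31.30.27:6443; registration then succeeds at 08:58:54. Illustrative checks, first on the node itself and then from any host with an admin kubeconfig:

    ss -ltnp | grep 6443 || echo "apiserver not listening yet"
    curl -sk https://172.31.30.27:6443/healthz ; echo

    kubectl get nodes -o wide
    kubectl get csr                                        # the bootstrap client-certificate request
    kubectl -n kube-node-lease get lease ip-172-31-30-27   # the lease the controller retries above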
Jul 2 08:58:57.470883 systemd[1]: Reloading finished in 599 ms. Jul 2 08:58:57.549683 kubelet[2871]: I0702 08:58:57.548790 2871 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:58:57.549266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:57.562019 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:58:57.562466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:57.562584 systemd[1]: kubelet.service: Consumed 2.527s CPU time, 115.2M memory peak, 0B memory swap peak. Jul 2 08:58:57.573007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:57.920845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:57.932110 (kubelet)[3252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:58:58.056387 kubelet[3252]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:58:58.056387 kubelet[3252]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:58:58.056387 kubelet[3252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:58:58.056387 kubelet[3252]: I0702 08:58:58.055860 3252 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:58:58.066243 kubelet[3252]: I0702 08:58:58.065901 3252 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 08:58:58.066243 kubelet[3252]: I0702 08:58:58.065945 3252 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:58:58.066559 kubelet[3252]: I0702 08:58:58.066304 3252 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 08:58:58.069573 kubelet[3252]: I0702 08:58:58.069525 3252 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 08:58:58.072942 kubelet[3252]: I0702 08:58:58.072886 3252 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:58:58.092524 kubelet[3252]: I0702 08:58:58.089900 3252 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:58:58.092524 kubelet[3252]: I0702 08:58:58.090330 3252 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:58:58.092524 kubelet[3252]: I0702 08:58:58.090665 3252 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:58:58.092524 kubelet[3252]: I0702 08:58:58.090705 3252 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:58:58.092524 kubelet[3252]: I0702 08:58:58.090724 3252 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:58:58.092524 kubelet[3252]: I0702 08:58:58.090785 3252 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:58:58.093028 kubelet[3252]: I0702 08:58:58.090956 3252 kubelet.go:396] "Attempting to sync node with API server" Jul 2 08:58:58.093028 kubelet[3252]: I0702 08:58:58.090984 3252 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:58:58.093028 kubelet[3252]: I0702 08:58:58.091022 3252 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:58:58.093028 kubelet[3252]: I0702 08:58:58.091044 3252 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:58:58.093028 kubelet[3252]: I0702 08:58:58.092803 3252 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:58:58.093391 kubelet[3252]: I0702 08:58:58.093302 3252 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:58:58.094054 kubelet[3252]: I0702 08:58:58.094008 3252 server.go:1256] "Started kubelet" Jul 2 08:58:58.104852 kubelet[3252]: I0702 08:58:58.103909 3252 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:58:58.120521 kubelet[3252]: I0702 08:58:58.119681 3252 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:58:58.123692 kubelet[3252]: I0702 08:58:58.123641 3252 server.go:461] "Adding debug handlers to kubelet server" Jul 2 08:58:58.132121 kubelet[3252]: I0702 08:58:58.131613 3252 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Jul 2 08:58:58.132121 kubelet[3252]: I0702 08:58:58.132078 3252 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:58:58.139986 kubelet[3252]: I0702 08:58:58.139929 3252 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:58:58.143595 kubelet[3252]: I0702 08:58:58.143533 3252 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:58:58.152539 kubelet[3252]: I0702 08:58:58.152336 3252 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:58:58.164769 kubelet[3252]: I0702 08:58:58.164732 3252 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:58:58.165517 kubelet[3252]: I0702 08:58:58.165064 3252 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:58:58.175646 kubelet[3252]: I0702 08:58:58.174457 3252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:58:58.183640 kubelet[3252]: I0702 08:58:58.182765 3252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:58:58.183640 kubelet[3252]: I0702 08:58:58.182808 3252 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:58:58.183640 kubelet[3252]: I0702 08:58:58.182839 3252 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 08:58:58.183877 kubelet[3252]: E0702 08:58:58.183663 3252 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:58:58.216676 kubelet[3252]: E0702 08:58:58.216639 3252 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:58:58.226436 kubelet[3252]: I0702 08:58:58.226274 3252 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:58:58.246826 kubelet[3252]: E0702 08:58:58.246782 3252 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jul 2 08:58:58.251274 kubelet[3252]: I0702 08:58:58.251216 3252 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-27" Jul 2 08:58:58.275964 kubelet[3252]: I0702 08:58:58.275912 3252 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-27" Jul 2 08:58:58.276127 kubelet[3252]: I0702 08:58:58.276041 3252 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-27" Jul 2 08:58:58.285274 kubelet[3252]: E0702 08:58:58.284998 3252 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 08:58:58.348665 kubelet[3252]: I0702 08:58:58.348088 3252 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:58:58.348665 kubelet[3252]: I0702 08:58:58.348180 3252 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:58:58.348665 kubelet[3252]: I0702 08:58:58.348213 3252 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:58:58.348665 kubelet[3252]: I0702 08:58:58.348443 3252 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:58:58.348665 kubelet[3252]: I0702 08:58:58.348573 3252 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 08:58:58.348665 kubelet[3252]: I0702 08:58:58.348593 3252 policy_none.go:49] "None policy: Start" Jul 2 08:58:58.351197 kubelet[3252]: I0702 08:58:58.351163 3252 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:58:58.351672 kubelet[3252]: I0702 08:58:58.351406 3252 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:58:58.351954 kubelet[3252]: I0702 08:58:58.351931 3252 state_mem.go:75] "Updated machine memory state" Jul 2 08:58:58.364310 kubelet[3252]: I0702 08:58:58.364272 3252 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:58:58.368255 kubelet[3252]: I0702 08:58:58.367905 3252 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:58:58.487069 kubelet[3252]: I0702 08:58:58.486924 3252 topology_manager.go:215] "Topology Admit Handler" podUID="278a5b58bc069d1f23770c9a7a1ad82c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:58.487703 kubelet[3252]: I0702 08:58:58.487341 3252 topology_manager.go:215] "Topology Admit Handler" podUID="bca169b650dee8e27144ec2c93d6e0d9" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-27" Jul 2 08:58:58.487703 kubelet[3252]: I0702 08:58:58.487445 3252 topology_manager.go:215] "Topology Admit Handler" podUID="d7a23e7a14ac95eb2a20f87cdbe8d7eb" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-27" Jul 2 08:58:58.497760 kubelet[3252]: E0702 08:58:58.497721 3252 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-30-27\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-27" Jul 2 08:58:58.501451 kubelet[3252]: E0702 08:58:58.501365 3252 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-27\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-27" Jul 2 
08:58:58.555201 kubelet[3252]: I0702 08:58:58.554726 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/278a5b58bc069d1f23770c9a7a1ad82c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-27\" (UID: \"278a5b58bc069d1f23770c9a7a1ad82c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:58.555201 kubelet[3252]: I0702 08:58:58.554795 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7a23e7a14ac95eb2a20f87cdbe8d7eb-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-27\" (UID: \"d7a23e7a14ac95eb2a20f87cdbe8d7eb\") " pod="kube-system/kube-apiserver-ip-172-31-30-27" Jul 2 08:58:58.555201 kubelet[3252]: I0702 08:58:58.554843 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/278a5b58bc069d1f23770c9a7a1ad82c-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-27\" (UID: \"278a5b58bc069d1f23770c9a7a1ad82c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:58.555201 kubelet[3252]: I0702 08:58:58.554888 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/278a5b58bc069d1f23770c9a7a1ad82c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-27\" (UID: \"278a5b58bc069d1f23770c9a7a1ad82c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:58.555201 kubelet[3252]: I0702 08:58:58.554931 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/278a5b58bc069d1f23770c9a7a1ad82c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-27\" (UID: \"278a5b58bc069d1f23770c9a7a1ad82c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:58.555597 kubelet[3252]: I0702 08:58:58.554978 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/278a5b58bc069d1f23770c9a7a1ad82c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-27\" (UID: \"278a5b58bc069d1f23770c9a7a1ad82c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:58.555597 kubelet[3252]: I0702 08:58:58.555020 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bca169b650dee8e27144ec2c93d6e0d9-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-27\" (UID: \"bca169b650dee8e27144ec2c93d6e0d9\") " pod="kube-system/kube-scheduler-ip-172-31-30-27" Jul 2 08:58:58.555597 kubelet[3252]: I0702 08:58:58.555061 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7a23e7a14ac95eb2a20f87cdbe8d7eb-ca-certs\") pod \"kube-apiserver-ip-172-31-30-27\" (UID: \"d7a23e7a14ac95eb2a20f87cdbe8d7eb\") " pod="kube-system/kube-apiserver-ip-172-31-30-27" Jul 2 08:58:58.555597 kubelet[3252]: I0702 08:58:58.555110 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7a23e7a14ac95eb2a20f87cdbe8d7eb-usr-share-ca-certificates\") 
pod \"kube-apiserver-ip-172-31-30-27\" (UID: \"d7a23e7a14ac95eb2a20f87cdbe8d7eb\") " pod="kube-system/kube-apiserver-ip-172-31-30-27" Jul 2 08:58:59.109919 kubelet[3252]: I0702 08:58:59.109840 3252 apiserver.go:52] "Watching apiserver" Jul 2 08:58:59.152997 kubelet[3252]: I0702 08:58:59.152900 3252 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:58:59.407971 kubelet[3252]: E0702 08:58:59.407888 3252 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-27\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-27" Jul 2 08:58:59.416133 kubelet[3252]: I0702 08:58:59.416080 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-27" podStartSLOduration=1.415998141 podStartE2EDuration="1.415998141s" podCreationTimestamp="2024-07-02 08:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:59.415533117 +0000 UTC m=+1.470877101" watchObservedRunningTime="2024-07-02 08:58:59.415998141 +0000 UTC m=+1.471342125" Jul 2 08:58:59.490299 kubelet[3252]: I0702 08:58:59.490025 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-27" podStartSLOduration=2.489967857 podStartE2EDuration="2.489967857s" podCreationTimestamp="2024-07-02 08:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:59.456111441 +0000 UTC m=+1.511455437" watchObservedRunningTime="2024-07-02 08:58:59.489967857 +0000 UTC m=+1.545311841" Jul 2 08:58:59.531980 kubelet[3252]: I0702 08:58:59.531896 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-27" podStartSLOduration=4.531841977 podStartE2EDuration="4.531841977s" podCreationTimestamp="2024-07-02 08:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:58:59.490981581 +0000 UTC m=+1.546325577" watchObservedRunningTime="2024-07-02 08:58:59.531841977 +0000 UTC m=+1.587185949" Jul 2 08:59:00.817194 update_engine[1995]: I0702 08:59:00.817126 1995 update_attempter.cc:509] Updating boot flags... Jul 2 08:59:00.973538 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3305) Jul 2 08:59:01.479529 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3296) Jul 2 08:59:05.098160 sudo[2334]: pam_unix(sudo:session): session closed for user root Jul 2 08:59:05.121821 sshd[2331]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:05.129098 systemd[1]: sshd@6-172.31.30.27:22-147.75.109.163:41882.service: Deactivated successfully. Jul 2 08:59:05.133589 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:59:05.134139 systemd[1]: session-7.scope: Consumed 9.351s CPU time, 132.1M memory peak, 0B memory swap peak. Jul 2 08:59:05.135386 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:59:05.138330 systemd-logind[1993]: Removed session 7. 
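Note that the kubelet started at 08:58:58 no longer bootstraps: it loads the rotated client certificate from /var/lib/kubelet/pki/kubelet-client-current.pem, and the "already exists" errors are just the mirror pods for the static control-plane pods created by the previous kubelet instance. A rough way to confirm both (illustrative; openssl here reads the first certificate block in that combined cert/key PEM):

    openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -enddate
    kubectl -n kube-system get pods -o wide | grep ip-172-31-30-27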
Jul 2 08:59:10.296262 kubelet[3252]: I0702 08:59:10.296213 3252 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:59:10.297731 containerd[2009]: time="2024-07-02T08:59:10.297563215Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 08:59:10.299001 kubelet[3252]: I0702 08:59:10.298957 3252 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:59:11.114584 kubelet[3252]: I0702 08:59:11.114522 3252 topology_manager.go:215] "Topology Admit Handler" podUID="f522642c-8a7d-428e-ae0c-915e82f98f49" podNamespace="kube-system" podName="kube-proxy-bfx6x" Jul 2 08:59:11.135004 systemd[1]: Created slice kubepods-besteffort-podf522642c_8a7d_428e_ae0c_915e82f98f49.slice - libcontainer container kubepods-besteffort-podf522642c_8a7d_428e_ae0c_915e82f98f49.slice. Jul 2 08:59:11.147341 kubelet[3252]: I0702 08:59:11.146837 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f522642c-8a7d-428e-ae0c-915e82f98f49-kube-proxy\") pod \"kube-proxy-bfx6x\" (UID: \"f522642c-8a7d-428e-ae0c-915e82f98f49\") " pod="kube-system/kube-proxy-bfx6x" Jul 2 08:59:11.147341 kubelet[3252]: I0702 08:59:11.146913 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f522642c-8a7d-428e-ae0c-915e82f98f49-lib-modules\") pod \"kube-proxy-bfx6x\" (UID: \"f522642c-8a7d-428e-ae0c-915e82f98f49\") " pod="kube-system/kube-proxy-bfx6x" Jul 2 08:59:11.147341 kubelet[3252]: I0702 08:59:11.146963 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8v6z\" (UniqueName: \"kubernetes.io/projected/f522642c-8a7d-428e-ae0c-915e82f98f49-kube-api-access-k8v6z\") pod \"kube-proxy-bfx6x\" (UID: \"f522642c-8a7d-428e-ae0c-915e82f98f49\") " pod="kube-system/kube-proxy-bfx6x" Jul 2 08:59:11.147341 kubelet[3252]: I0702 08:59:11.147011 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f522642c-8a7d-428e-ae0c-915e82f98f49-xtables-lock\") pod \"kube-proxy-bfx6x\" (UID: \"f522642c-8a7d-428e-ae0c-915e82f98f49\") " pod="kube-system/kube-proxy-bfx6x" Jul 2 08:59:11.411012 kubelet[3252]: I0702 08:59:11.410945 3252 topology_manager.go:215] "Topology Admit Handler" podUID="f787c5a8-6438-4c40-9246-d8be820451a0" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-wgkxv" Jul 2 08:59:11.421786 kubelet[3252]: W0702 08:59:11.420600 3252 reflector.go:539] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-30-27" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-30-27' and this object Jul 2 08:59:11.421786 kubelet[3252]: E0702 08:59:11.420670 3252 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-30-27" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-30-27' and this object Jul 2 
08:59:11.421786 kubelet[3252]: W0702 08:59:11.420749 3252 reflector.go:539] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-27" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-30-27' and this object Jul 2 08:59:11.421786 kubelet[3252]: E0702 08:59:11.420774 3252 reflector.go:147] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-27" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-30-27' and this object Jul 2 08:59:11.433014 systemd[1]: Created slice kubepods-besteffort-podf787c5a8_6438_4c40_9246_d8be820451a0.slice - libcontainer container kubepods-besteffort-podf787c5a8_6438_4c40_9246_d8be820451a0.slice. Jul 2 08:59:11.448921 kubelet[3252]: I0702 08:59:11.448825 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vtrt\" (UniqueName: \"kubernetes.io/projected/f787c5a8-6438-4c40-9246-d8be820451a0-kube-api-access-8vtrt\") pod \"tigera-operator-76c4974c85-wgkxv\" (UID: \"f787c5a8-6438-4c40-9246-d8be820451a0\") " pod="tigera-operator/tigera-operator-76c4974c85-wgkxv" Jul 2 08:59:11.449094 kubelet[3252]: I0702 08:59:11.448964 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f787c5a8-6438-4c40-9246-d8be820451a0-var-lib-calico\") pod \"tigera-operator-76c4974c85-wgkxv\" (UID: \"f787c5a8-6438-4c40-9246-d8be820451a0\") " pod="tigera-operator/tigera-operator-76c4974c85-wgkxv" Jul 2 08:59:11.456356 containerd[2009]: time="2024-07-02T08:59:11.455812388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bfx6x,Uid:f522642c-8a7d-428e-ae0c-915e82f98f49,Namespace:kube-system,Attempt:0,}" Jul 2 08:59:11.497985 containerd[2009]: time="2024-07-02T08:59:11.497825169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:11.498981 containerd[2009]: time="2024-07-02T08:59:11.498291957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:11.498981 containerd[2009]: time="2024-07-02T08:59:11.498542805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:11.498981 containerd[2009]: time="2024-07-02T08:59:11.498587673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:11.537815 systemd[1]: Started cri-containerd-341896989d6cab02f037bd3f9c66a2edd8cbb1246fcc90ec93575ff39dd3e987.scope - libcontainer container 341896989d6cab02f037bd3f9c66a2edd8cbb1246fcc90ec93575ff39dd3e987. 
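The runtime-config update above pushes the node's pod CIDR (192.168.0.0/24) to containerd, and kube-proxy-bfx6x is the kube-proxy DaemonSet pod scheduled onto this node; the "forbidden ... no relationship found" warnings for the tigera-operator ConfigMaps clear once the scheduler binds that pod to the node. Illustrative checks from a host with an admin kubeconfig (object names taken from the log):

    kubectl get node ip-172-31-30-27 -o jsonpath='{.spec.podCIDR}{"\n"}'
    kubectl -n kube-system get daemonset kube-proxy
    kubectl -n kube-system get pod kube-proxy-bfx6x -o wide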
Jul 2 08:59:11.592971 containerd[2009]: time="2024-07-02T08:59:11.592878561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bfx6x,Uid:f522642c-8a7d-428e-ae0c-915e82f98f49,Namespace:kube-system,Attempt:0,} returns sandbox id \"341896989d6cab02f037bd3f9c66a2edd8cbb1246fcc90ec93575ff39dd3e987\"" Jul 2 08:59:11.599947 containerd[2009]: time="2024-07-02T08:59:11.599890029Z" level=info msg="CreateContainer within sandbox \"341896989d6cab02f037bd3f9c66a2edd8cbb1246fcc90ec93575ff39dd3e987\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:59:11.627761 containerd[2009]: time="2024-07-02T08:59:11.627624261Z" level=info msg="CreateContainer within sandbox \"341896989d6cab02f037bd3f9c66a2edd8cbb1246fcc90ec93575ff39dd3e987\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a47daec0b0e7e38b6d9ad03b3bbf9c43faf7f4a62d5ed3e78fadf7ce092257d\"" Jul 2 08:59:11.628525 containerd[2009]: time="2024-07-02T08:59:11.628455417Z" level=info msg="StartContainer for \"3a47daec0b0e7e38b6d9ad03b3bbf9c43faf7f4a62d5ed3e78fadf7ce092257d\"" Jul 2 08:59:11.678803 systemd[1]: Started cri-containerd-3a47daec0b0e7e38b6d9ad03b3bbf9c43faf7f4a62d5ed3e78fadf7ce092257d.scope - libcontainer container 3a47daec0b0e7e38b6d9ad03b3bbf9c43faf7f4a62d5ed3e78fadf7ce092257d. Jul 2 08:59:11.772671 containerd[2009]: time="2024-07-02T08:59:11.772508662Z" level=info msg="StartContainer for \"3a47daec0b0e7e38b6d9ad03b3bbf9c43faf7f4a62d5ed3e78fadf7ce092257d\" returns successfully" Jul 2 08:59:12.641551 containerd[2009]: time="2024-07-02T08:59:12.641418934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-wgkxv,Uid:f787c5a8-6438-4c40-9246-d8be820451a0,Namespace:tigera-operator,Attempt:0,}" Jul 2 08:59:12.683258 containerd[2009]: time="2024-07-02T08:59:12.683024999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:12.683663 containerd[2009]: time="2024-07-02T08:59:12.683328095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:12.683663 containerd[2009]: time="2024-07-02T08:59:12.683431175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:12.683663 containerd[2009]: time="2024-07-02T08:59:12.683468471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:12.736825 systemd[1]: Started cri-containerd-179cd9668e4b970fc18ea3c1e2bdaf80517ae3969b186ee3e59c306dad459f94.scope - libcontainer container 179cd9668e4b970fc18ea3c1e2bdaf80517ae3969b186ee3e59c306dad459f94. Jul 2 08:59:12.802241 containerd[2009]: time="2024-07-02T08:59:12.802160771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-wgkxv,Uid:f787c5a8-6438-4c40-9246-d8be820451a0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"179cd9668e4b970fc18ea3c1e2bdaf80517ae3969b186ee3e59c306dad459f94\"" Jul 2 08:59:12.808215 containerd[2009]: time="2024-07-02T08:59:12.807954563Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 08:59:13.281225 systemd[1]: run-containerd-runc-k8s.io-179cd9668e4b970fc18ea3c1e2bdaf80517ae3969b186ee3e59c306dad459f94-runc.wekfcO.mount: Deactivated successfully. 
Jul 2 08:59:14.325979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498568680.mount: Deactivated successfully. Jul 2 08:59:14.949388 containerd[2009]: time="2024-07-02T08:59:14.948019466Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:14.953287 containerd[2009]: time="2024-07-02T08:59:14.953198606Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473646" Jul 2 08:59:14.954865 containerd[2009]: time="2024-07-02T08:59:14.954769490Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:14.964336 containerd[2009]: time="2024-07-02T08:59:14.964251722Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:14.966360 containerd[2009]: time="2024-07-02T08:59:14.966091526Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.158069007s" Jul 2 08:59:14.966360 containerd[2009]: time="2024-07-02T08:59:14.966154082Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 08:59:14.971398 containerd[2009]: time="2024-07-02T08:59:14.971333618Z" level=info msg="CreateContainer within sandbox \"179cd9668e4b970fc18ea3c1e2bdaf80517ae3969b186ee3e59c306dad459f94\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 08:59:14.993682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450350351.mount: Deactivated successfully. Jul 2 08:59:14.996361 containerd[2009]: time="2024-07-02T08:59:14.996194534Z" level=info msg="CreateContainer within sandbox \"179cd9668e4b970fc18ea3c1e2bdaf80517ae3969b186ee3e59c306dad459f94\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83\"" Jul 2 08:59:14.997100 containerd[2009]: time="2024-07-02T08:59:14.997053542Z" level=info msg="StartContainer for \"1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83\"" Jul 2 08:59:15.064782 systemd[1]: Started cri-containerd-1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83.scope - libcontainer container 1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83. Jul 2 08:59:15.112364 containerd[2009]: time="2024-07-02T08:59:15.112178135Z" level=info msg="StartContainer for \"1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83\" returns successfully" Jul 2 08:59:15.328040 systemd[1]: run-containerd-runc-k8s.io-1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83-runc.7HFBvS.mount: Deactivated successfully. 
Jul 2 08:59:15.365274 kubelet[3252]: I0702 08:59:15.365209 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bfx6x" podStartSLOduration=4.365146128 podStartE2EDuration="4.365146128s" podCreationTimestamp="2024-07-02 08:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:59:12.360652245 +0000 UTC m=+14.415996229" watchObservedRunningTime="2024-07-02 08:59:15.365146128 +0000 UTC m=+17.420490100" Jul 2 08:59:19.530510 kubelet[3252]: I0702 08:59:19.530001 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-wgkxv" podStartSLOduration=6.368087366 podStartE2EDuration="8.529939877s" podCreationTimestamp="2024-07-02 08:59:11 +0000 UTC" firstStartedPulling="2024-07-02 08:59:12.804729611 +0000 UTC m=+14.860073571" lastFinishedPulling="2024-07-02 08:59:14.966582098 +0000 UTC m=+17.021926082" observedRunningTime="2024-07-02 08:59:15.365725572 +0000 UTC m=+17.421069568" watchObservedRunningTime="2024-07-02 08:59:19.529939877 +0000 UTC m=+21.585283885" Jul 2 08:59:19.530510 kubelet[3252]: I0702 08:59:19.530206 3252 topology_manager.go:215] "Topology Admit Handler" podUID="fbab17a5-64d4-4077-964b-50e756570970" podNamespace="calico-system" podName="calico-typha-6bb4c9b8d-h6b94" Jul 2 08:59:19.549225 systemd[1]: Created slice kubepods-besteffort-podfbab17a5_64d4_4077_964b_50e756570970.slice - libcontainer container kubepods-besteffort-podfbab17a5_64d4_4077_964b_50e756570970.slice. Jul 2 08:59:19.605253 kubelet[3252]: I0702 08:59:19.605180 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfzm6\" (UniqueName: \"kubernetes.io/projected/fbab17a5-64d4-4077-964b-50e756570970-kube-api-access-jfzm6\") pod \"calico-typha-6bb4c9b8d-h6b94\" (UID: \"fbab17a5-64d4-4077-964b-50e756570970\") " pod="calico-system/calico-typha-6bb4c9b8d-h6b94" Jul 2 08:59:19.605412 kubelet[3252]: I0702 08:59:19.605272 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbab17a5-64d4-4077-964b-50e756570970-tigera-ca-bundle\") pod \"calico-typha-6bb4c9b8d-h6b94\" (UID: \"fbab17a5-64d4-4077-964b-50e756570970\") " pod="calico-system/calico-typha-6bb4c9b8d-h6b94" Jul 2 08:59:19.605412 kubelet[3252]: I0702 08:59:19.605328 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fbab17a5-64d4-4077-964b-50e756570970-typha-certs\") pod \"calico-typha-6bb4c9b8d-h6b94\" (UID: \"fbab17a5-64d4-4077-964b-50e756570970\") " pod="calico-system/calico-typha-6bb4c9b8d-h6b94" Jul 2 08:59:19.805553 kubelet[3252]: I0702 08:59:19.805345 3252 topology_manager.go:215] "Topology Admit Handler" podUID="8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd" podNamespace="calico-system" podName="calico-node-vq9tx" Jul 2 08:59:19.830346 systemd[1]: Created slice kubepods-besteffort-pod8ec4d365_c2ea_4c95_b68f_24b0fb45ffbd.slice - libcontainer container kubepods-besteffort-pod8ec4d365_c2ea_4c95_b68f_24b0fb45ffbd.slice. 
Jul 2 08:59:19.859582 containerd[2009]: time="2024-07-02T08:59:19.859472646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb4c9b8d-h6b94,Uid:fbab17a5-64d4-4077-964b-50e756570970,Namespace:calico-system,Attempt:0,}" Jul 2 08:59:19.907505 containerd[2009]: time="2024-07-02T08:59:19.906593490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:19.907505 containerd[2009]: time="2024-07-02T08:59:19.907307910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:19.907505 containerd[2009]: time="2024-07-02T08:59:19.907358706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:19.907505 containerd[2009]: time="2024-07-02T08:59:19.907396158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:19.908492 kubelet[3252]: I0702 08:59:19.908219 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-policysync\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908492 kubelet[3252]: I0702 08:59:19.908291 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-var-run-calico\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908492 kubelet[3252]: I0702 08:59:19.908421 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-var-lib-calico\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908680 kubelet[3252]: I0702 08:59:19.908477 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-xtables-lock\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908680 kubelet[3252]: I0702 08:59:19.908565 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-flexvol-driver-host\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908680 kubelet[3252]: I0702 08:59:19.908612 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-cni-log-dir\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908680 kubelet[3252]: I0702 08:59:19.908660 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-cni-net-dir\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908902 kubelet[3252]: I0702 08:59:19.908707 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmdkl\" (UniqueName: \"kubernetes.io/projected/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-kube-api-access-fmdkl\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908902 kubelet[3252]: I0702 08:59:19.908752 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-tigera-ca-bundle\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908902 kubelet[3252]: I0702 08:59:19.908802 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-node-certs\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908902 kubelet[3252]: I0702 08:59:19.908846 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-lib-modules\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.908902 kubelet[3252]: I0702 08:59:19.908891 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd-cni-bin-dir\") pod \"calico-node-vq9tx\" (UID: \"8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd\") " pod="calico-system/calico-node-vq9tx" Jul 2 08:59:19.956875 systemd[1]: Started cri-containerd-94f572b5e80d5f2bbbeee106f936961a26eb627e5ebdd3bee24b89e913dffb90.scope - libcontainer container 94f572b5e80d5f2bbbeee106f936961a26eb627e5ebdd3bee24b89e913dffb90. 
Jul 2 08:59:19.991543 kubelet[3252]: I0702 08:59:19.991316 3252 topology_manager.go:215] "Topology Admit Handler" podUID="854c111d-7e31-40e1-a3bc-810de7814240" podNamespace="calico-system" podName="csi-node-driver-58jnw" Jul 2 08:59:19.994060 kubelet[3252]: E0702 08:59:19.993890 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-58jnw" podUID="854c111d-7e31-40e1-a3bc-810de7814240" Jul 2 08:59:20.012384 kubelet[3252]: E0702 08:59:20.012105 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.012384 kubelet[3252]: W0702 08:59:20.012147 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.012384 kubelet[3252]: E0702 08:59:20.012199 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.014532 kubelet[3252]: E0702 08:59:20.013355 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.014947 kubelet[3252]: W0702 08:59:20.014309 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.014947 kubelet[3252]: E0702 08:59:20.014735 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.018712 kubelet[3252]: E0702 08:59:20.017819 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.018712 kubelet[3252]: W0702 08:59:20.017886 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.018712 kubelet[3252]: E0702 08:59:20.018656 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.020086 kubelet[3252]: E0702 08:59:20.019718 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.020086 kubelet[3252]: W0702 08:59:20.019754 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.020086 kubelet[3252]: E0702 08:59:20.019917 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.022419 kubelet[3252]: E0702 08:59:20.022118 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.022419 kubelet[3252]: W0702 08:59:20.022153 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.022419 kubelet[3252]: E0702 08:59:20.022306 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.023428 kubelet[3252]: E0702 08:59:20.023141 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.023428 kubelet[3252]: W0702 08:59:20.023171 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.023428 kubelet[3252]: E0702 08:59:20.023298 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.024368 kubelet[3252]: E0702 08:59:20.024123 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.024368 kubelet[3252]: W0702 08:59:20.024153 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.024368 kubelet[3252]: E0702 08:59:20.024287 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.026879 kubelet[3252]: E0702 08:59:20.026400 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.026879 kubelet[3252]: W0702 08:59:20.026438 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.029871 kubelet[3252]: E0702 08:59:20.029368 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.031395 kubelet[3252]: E0702 08:59:20.031002 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.031395 kubelet[3252]: W0702 08:59:20.031039 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.032654 kubelet[3252]: E0702 08:59:20.032074 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.035133 kubelet[3252]: E0702 08:59:20.035079 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.036405 kubelet[3252]: W0702 08:59:20.036359 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.037160 kubelet[3252]: E0702 08:59:20.037128 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.037708 kubelet[3252]: W0702 08:59:20.037333 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.038544 kubelet[3252]: E0702 08:59:20.038413 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.039976 kubelet[3252]: E0702 08:59:20.039827 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.039976 kubelet[3252]: E0702 08:59:20.039912 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.040602 kubelet[3252]: W0702 08:59:20.040299 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.040602 kubelet[3252]: E0702 08:59:20.040393 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.043696 kubelet[3252]: E0702 08:59:20.043176 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.043696 kubelet[3252]: W0702 08:59:20.043225 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.045076 kubelet[3252]: E0702 08:59:20.044138 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.045076 kubelet[3252]: W0702 08:59:20.044767 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.046954 kubelet[3252]: E0702 08:59:20.044459 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.046954 kubelet[3252]: E0702 08:59:20.045791 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.047924 kubelet[3252]: E0702 08:59:20.047890 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.048418 kubelet[3252]: W0702 08:59:20.048206 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.050339 kubelet[3252]: E0702 08:59:20.050135 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.051604 kubelet[3252]: E0702 08:59:20.050956 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.051604 kubelet[3252]: W0702 08:59:20.051459 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.052500 kubelet[3252]: E0702 08:59:20.051996 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.054020 kubelet[3252]: E0702 08:59:20.053800 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.054020 kubelet[3252]: W0702 08:59:20.053857 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.054020 kubelet[3252]: E0702 08:59:20.053947 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.060666 kubelet[3252]: E0702 08:59:20.058705 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.060666 kubelet[3252]: W0702 08:59:20.058740 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.060666 kubelet[3252]: E0702 08:59:20.059241 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.060666 kubelet[3252]: W0702 08:59:20.059276 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.061179 kubelet[3252]: E0702 08:59:20.061037 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.061179 kubelet[3252]: E0702 08:59:20.061135 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.062233 kubelet[3252]: E0702 08:59:20.061436 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.062233 kubelet[3252]: W0702 08:59:20.061460 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.062972 kubelet[3252]: E0702 08:59:20.062864 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.063904 kubelet[3252]: E0702 08:59:20.063586 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.063904 kubelet[3252]: W0702 08:59:20.063614 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.065066 kubelet[3252]: E0702 08:59:20.064431 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.068411 kubelet[3252]: E0702 08:59:20.066982 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.068411 kubelet[3252]: W0702 08:59:20.067016 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.068739 kubelet[3252]: E0702 08:59:20.068711 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.070460 kubelet[3252]: E0702 08:59:20.070426 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.071038 kubelet[3252]: W0702 08:59:20.070621 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.071607 kubelet[3252]: E0702 08:59:20.071525 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.072005 kubelet[3252]: E0702 08:59:20.071916 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.072005 kubelet[3252]: W0702 08:59:20.071937 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.072370 kubelet[3252]: E0702 08:59:20.072310 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.074059 kubelet[3252]: E0702 08:59:20.073792 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.074059 kubelet[3252]: W0702 08:59:20.073841 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.075536 kubelet[3252]: E0702 08:59:20.074813 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.075952 kubelet[3252]: E0702 08:59:20.075904 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.076344 kubelet[3252]: W0702 08:59:20.076311 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.076842 kubelet[3252]: E0702 08:59:20.076809 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.080893 kubelet[3252]: E0702 08:59:20.080841 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.081197 kubelet[3252]: W0702 08:59:20.081089 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.081197 kubelet[3252]: E0702 08:59:20.081139 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.090638 kubelet[3252]: E0702 08:59:20.090590 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.090965 kubelet[3252]: W0702 08:59:20.090832 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.090965 kubelet[3252]: E0702 08:59:20.090879 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.091698 kubelet[3252]: E0702 08:59:20.091442 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.091698 kubelet[3252]: W0702 08:59:20.091543 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.091698 kubelet[3252]: E0702 08:59:20.091626 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.092899 kubelet[3252]: E0702 08:59:20.092638 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.092899 kubelet[3252]: W0702 08:59:20.092670 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.092899 kubelet[3252]: E0702 08:59:20.092767 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.093885 kubelet[3252]: E0702 08:59:20.093660 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.093885 kubelet[3252]: W0702 08:59:20.093717 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.093885 kubelet[3252]: E0702 08:59:20.093750 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.095239 kubelet[3252]: E0702 08:59:20.094886 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.095239 kubelet[3252]: W0702 08:59:20.094918 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.095239 kubelet[3252]: E0702 08:59:20.095062 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.095838 kubelet[3252]: E0702 08:59:20.095801 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.096111 kubelet[3252]: W0702 08:59:20.095942 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.096111 kubelet[3252]: E0702 08:59:20.095978 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.098048 kubelet[3252]: E0702 08:59:20.097816 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.098048 kubelet[3252]: W0702 08:59:20.097877 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.098048 kubelet[3252]: E0702 08:59:20.097915 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.099609 kubelet[3252]: E0702 08:59:20.099056 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.099609 kubelet[3252]: W0702 08:59:20.099087 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.099609 kubelet[3252]: E0702 08:59:20.099122 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.101386 kubelet[3252]: E0702 08:59:20.100912 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.101386 kubelet[3252]: W0702 08:59:20.100948 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.101386 kubelet[3252]: E0702 08:59:20.100984 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.103894 kubelet[3252]: E0702 08:59:20.102965 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.103894 kubelet[3252]: W0702 08:59:20.103049 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.103894 kubelet[3252]: E0702 08:59:20.103103 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.105196 kubelet[3252]: E0702 08:59:20.104991 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.105196 kubelet[3252]: W0702 08:59:20.105026 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.105196 kubelet[3252]: E0702 08:59:20.105063 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.106397 kubelet[3252]: E0702 08:59:20.106120 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.106397 kubelet[3252]: W0702 08:59:20.106152 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.106397 kubelet[3252]: E0702 08:59:20.106203 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.107443 kubelet[3252]: E0702 08:59:20.107410 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.107873 kubelet[3252]: W0702 08:59:20.107634 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.107873 kubelet[3252]: E0702 08:59:20.107684 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.108988 kubelet[3252]: E0702 08:59:20.108780 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.108988 kubelet[3252]: W0702 08:59:20.108813 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.108988 kubelet[3252]: E0702 08:59:20.108847 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.109450 kubelet[3252]: E0702 08:59:20.109430 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.109687 kubelet[3252]: W0702 08:59:20.109568 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.109687 kubelet[3252]: E0702 08:59:20.109603 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.110280 kubelet[3252]: E0702 08:59:20.110141 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.110280 kubelet[3252]: W0702 08:59:20.110168 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.110280 kubelet[3252]: E0702 08:59:20.110216 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.111535 kubelet[3252]: E0702 08:59:20.111314 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.111535 kubelet[3252]: W0702 08:59:20.111344 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.111535 kubelet[3252]: E0702 08:59:20.111437 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.112236 kubelet[3252]: I0702 08:59:20.111980 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/854c111d-7e31-40e1-a3bc-810de7814240-kubelet-dir\") pod \"csi-node-driver-58jnw\" (UID: \"854c111d-7e31-40e1-a3bc-810de7814240\") " pod="calico-system/csi-node-driver-58jnw" Jul 2 08:59:20.112573 kubelet[3252]: E0702 08:59:20.112445 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.112573 kubelet[3252]: W0702 08:59:20.112470 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.112936 kubelet[3252]: E0702 08:59:20.112787 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.113538 kubelet[3252]: E0702 08:59:20.113512 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.113823 kubelet[3252]: W0702 08:59:20.113691 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.113958 kubelet[3252]: E0702 08:59:20.113923 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.114606 kubelet[3252]: E0702 08:59:20.114379 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.114606 kubelet[3252]: W0702 08:59:20.114409 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.114606 kubelet[3252]: E0702 08:59:20.114446 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.114606 kubelet[3252]: I0702 08:59:20.114525 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/854c111d-7e31-40e1-a3bc-810de7814240-socket-dir\") pod \"csi-node-driver-58jnw\" (UID: \"854c111d-7e31-40e1-a3bc-810de7814240\") " pod="calico-system/csi-node-driver-58jnw" Jul 2 08:59:20.115420 kubelet[3252]: E0702 08:59:20.115298 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.115420 kubelet[3252]: W0702 08:59:20.115321 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.115781 kubelet[3252]: E0702 08:59:20.115615 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.115781 kubelet[3252]: I0702 08:59:20.115671 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/854c111d-7e31-40e1-a3bc-810de7814240-registration-dir\") pod \"csi-node-driver-58jnw\" (UID: \"854c111d-7e31-40e1-a3bc-810de7814240\") " pod="calico-system/csi-node-driver-58jnw" Jul 2 08:59:20.116369 kubelet[3252]: E0702 08:59:20.116218 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.116369 kubelet[3252]: W0702 08:59:20.116270 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.116893 kubelet[3252]: E0702 08:59:20.116714 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.117104 kubelet[3252]: E0702 08:59:20.117074 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.117391 kubelet[3252]: W0702 08:59:20.117254 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.117391 kubelet[3252]: E0702 08:59:20.117345 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.118133 kubelet[3252]: E0702 08:59:20.117993 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.118133 kubelet[3252]: W0702 08:59:20.118015 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.118373 kubelet[3252]: E0702 08:59:20.118303 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.118917 kubelet[3252]: E0702 08:59:20.118697 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.118917 kubelet[3252]: W0702 08:59:20.118716 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.119194 kubelet[3252]: E0702 08:59:20.119135 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.120453 kubelet[3252]: E0702 08:59:20.120278 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.120453 kubelet[3252]: W0702 08:59:20.120312 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.121421 kubelet[3252]: E0702 08:59:20.121176 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.121421 kubelet[3252]: W0702 08:59:20.121219 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.122303 kubelet[3252]: E0702 08:59:20.121894 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.122303 kubelet[3252]: W0702 08:59:20.121924 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.122303 kubelet[3252]: E0702 08:59:20.121960 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.122303 kubelet[3252]: E0702 08:59:20.122004 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.122303 kubelet[3252]: E0702 08:59:20.122025 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.123044 kubelet[3252]: E0702 08:59:20.123016 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.123205 kubelet[3252]: W0702 08:59:20.123177 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.123343 kubelet[3252]: E0702 08:59:20.123321 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.123666 kubelet[3252]: I0702 08:59:20.123468 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/854c111d-7e31-40e1-a3bc-810de7814240-varrun\") pod \"csi-node-driver-58jnw\" (UID: \"854c111d-7e31-40e1-a3bc-810de7814240\") " pod="calico-system/csi-node-driver-58jnw" Jul 2 08:59:20.124629 kubelet[3252]: E0702 08:59:20.124580 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.124888 kubelet[3252]: W0702 08:59:20.124805 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.125083 kubelet[3252]: E0702 08:59:20.124849 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.125946 kubelet[3252]: E0702 08:59:20.125916 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.126112 kubelet[3252]: W0702 08:59:20.126086 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.126306 kubelet[3252]: E0702 08:59:20.126246 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.128318 kubelet[3252]: E0702 08:59:20.127138 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.128318 kubelet[3252]: W0702 08:59:20.128251 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.128807 kubelet[3252]: E0702 08:59:20.128569 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.129407 kubelet[3252]: E0702 08:59:20.129344 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.129730 kubelet[3252]: W0702 08:59:20.129477 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.129877 kubelet[3252]: E0702 08:59:20.129828 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.140010 kubelet[3252]: E0702 08:59:20.139943 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.140010 kubelet[3252]: W0702 08:59:20.139980 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.140010 kubelet[3252]: E0702 08:59:20.140018 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.228972 kubelet[3252]: E0702 08:59:20.228906 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.228972 kubelet[3252]: W0702 08:59:20.228947 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.228972 kubelet[3252]: E0702 08:59:20.228988 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.231036 kubelet[3252]: E0702 08:59:20.230632 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.231036 kubelet[3252]: W0702 08:59:20.230782 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.232037 kubelet[3252]: E0702 08:59:20.231184 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.232892 kubelet[3252]: E0702 08:59:20.232615 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.232892 kubelet[3252]: W0702 08:59:20.232774 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.233666 kubelet[3252]: E0702 08:59:20.233234 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.233666 kubelet[3252]: E0702 08:59:20.233571 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.233666 kubelet[3252]: W0702 08:59:20.233592 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.233666 kubelet[3252]: E0702 08:59:20.233634 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.235351 kubelet[3252]: E0702 08:59:20.234954 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.235351 kubelet[3252]: W0702 08:59:20.234985 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.235794 kubelet[3252]: E0702 08:59:20.235088 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.235794 kubelet[3252]: I0702 08:59:20.235631 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q76rs\" (UniqueName: \"kubernetes.io/projected/854c111d-7e31-40e1-a3bc-810de7814240-kube-api-access-q76rs\") pod \"csi-node-driver-58jnw\" (UID: \"854c111d-7e31-40e1-a3bc-810de7814240\") " pod="calico-system/csi-node-driver-58jnw" Jul 2 08:59:20.235954 kubelet[3252]: E0702 08:59:20.235891 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.235954 kubelet[3252]: W0702 08:59:20.235910 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.237194 kubelet[3252]: E0702 08:59:20.236175 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.237194 kubelet[3252]: E0702 08:59:20.237027 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.237194 kubelet[3252]: W0702 08:59:20.237055 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.237984 kubelet[3252]: E0702 08:59:20.237533 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.238232 containerd[2009]: time="2024-07-02T08:59:20.238167952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb4c9b8d-h6b94,Uid:fbab17a5-64d4-4077-964b-50e756570970,Namespace:calico-system,Attempt:0,} returns sandbox id \"94f572b5e80d5f2bbbeee106f936961a26eb627e5ebdd3bee24b89e913dffb90\"" Jul 2 08:59:20.238724 kubelet[3252]: E0702 08:59:20.238632 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.238724 kubelet[3252]: W0702 08:59:20.238657 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.240268 kubelet[3252]: E0702 08:59:20.239834 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.240268 kubelet[3252]: E0702 08:59:20.239998 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.240268 kubelet[3252]: W0702 08:59:20.240092 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.241015 kubelet[3252]: E0702 08:59:20.240495 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.242904 kubelet[3252]: E0702 08:59:20.242813 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.242904 kubelet[3252]: W0702 08:59:20.242847 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.243888 kubelet[3252]: E0702 08:59:20.243831 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.243888 kubelet[3252]: W0702 08:59:20.243866 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.244847 kubelet[3252]: E0702 08:59:20.244782 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.244847 kubelet[3252]: W0702 08:59:20.244816 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.246521 kubelet[3252]: E0702 08:59:20.246432 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.246521 kubelet[3252]: E0702 08:59:20.246501 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.246521 kubelet[3252]: E0702 08:59:20.246530 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.246890 kubelet[3252]: E0702 08:59:20.246646 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.246890 kubelet[3252]: W0702 08:59:20.246662 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.248061 kubelet[3252]: E0702 08:59:20.248027 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.249833 containerd[2009]: time="2024-07-02T08:59:20.249775576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 08:59:20.250451 kubelet[3252]: E0702 08:59:20.250396 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.250451 kubelet[3252]: W0702 08:59:20.250428 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.251120 kubelet[3252]: E0702 08:59:20.250724 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.251120 kubelet[3252]: E0702 08:59:20.250876 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.251120 kubelet[3252]: W0702 08:59:20.250896 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.251120 kubelet[3252]: E0702 08:59:20.250978 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.251475 kubelet[3252]: E0702 08:59:20.251317 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.251475 kubelet[3252]: W0702 08:59:20.251335 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.254693 kubelet[3252]: E0702 08:59:20.253190 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.254693 kubelet[3252]: W0702 08:59:20.253227 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.254693 kubelet[3252]: E0702 08:59:20.253807 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.254910 kubelet[3252]: E0702 08:59:20.254767 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.254910 kubelet[3252]: W0702 08:59:20.254793 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.255561 kubelet[3252]: E0702 08:59:20.255309 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.255561 kubelet[3252]: W0702 08:59:20.255340 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.255561 kubelet[3252]: E0702 08:59:20.255379 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.257108 kubelet[3252]: E0702 08:59:20.256583 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.257108 kubelet[3252]: W0702 08:59:20.256621 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.257108 kubelet[3252]: E0702 08:59:20.256648 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.257108 kubelet[3252]: E0702 08:59:20.256653 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.257108 kubelet[3252]: E0702 08:59:20.256625 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.258168 kubelet[3252]: E0702 08:59:20.257533 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.258168 kubelet[3252]: W0702 08:59:20.257558 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.258168 kubelet[3252]: E0702 08:59:20.257609 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.259936 kubelet[3252]: E0702 08:59:20.258819 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.259936 kubelet[3252]: W0702 08:59:20.258850 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.259936 kubelet[3252]: E0702 08:59:20.258899 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.260520 kubelet[3252]: E0702 08:59:20.260461 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.260744 kubelet[3252]: W0702 08:59:20.260631 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.260744 kubelet[3252]: E0702 08:59:20.260675 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.344536 kubelet[3252]: E0702 08:59:20.341719 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.344536 kubelet[3252]: W0702 08:59:20.341793 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.344536 kubelet[3252]: E0702 08:59:20.341830 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.345847 kubelet[3252]: E0702 08:59:20.345535 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.345847 kubelet[3252]: W0702 08:59:20.345594 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.345847 kubelet[3252]: E0702 08:59:20.345652 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.346941 kubelet[3252]: E0702 08:59:20.346515 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.346941 kubelet[3252]: W0702 08:59:20.346546 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.346941 kubelet[3252]: E0702 08:59:20.346579 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:20.349595 kubelet[3252]: E0702 08:59:20.349356 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.349595 kubelet[3252]: W0702 08:59:20.349389 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.349595 kubelet[3252]: E0702 08:59:20.349449 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.354509 kubelet[3252]: E0702 08:59:20.350581 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.354509 kubelet[3252]: W0702 08:59:20.351571 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.354509 kubelet[3252]: E0702 08:59:20.351620 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.381030 kubelet[3252]: E0702 08:59:20.380996 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:20.381309 kubelet[3252]: W0702 08:59:20.381215 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:20.381309 kubelet[3252]: E0702 08:59:20.381258 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:20.440254 containerd[2009]: time="2024-07-02T08:59:20.439639841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vq9tx,Uid:8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd,Namespace:calico-system,Attempt:0,}" Jul 2 08:59:20.494655 containerd[2009]: time="2024-07-02T08:59:20.494509793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:20.494925 containerd[2009]: time="2024-07-02T08:59:20.494847269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:20.495341 containerd[2009]: time="2024-07-02T08:59:20.495268013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:20.495737 containerd[2009]: time="2024-07-02T08:59:20.495664361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:20.536832 systemd[1]: Started cri-containerd-07a7a8793175c507da1d2dae585ddf7c034811a5d3e256d11426535ad998e16d.scope - libcontainer container 07a7a8793175c507da1d2dae585ddf7c034811a5d3e256d11426535ad998e16d. 
Jul 2 08:59:20.610029 containerd[2009]: time="2024-07-02T08:59:20.609840678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vq9tx,Uid:8ec4d365-c2ea-4c95-b68f-24b0fb45ffbd,Namespace:calico-system,Attempt:0,} returns sandbox id \"07a7a8793175c507da1d2dae585ddf7c034811a5d3e256d11426535ad998e16d\"" Jul 2 08:59:22.185968 kubelet[3252]: E0702 08:59:22.185878 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-58jnw" podUID="854c111d-7e31-40e1-a3bc-810de7814240" Jul 2 08:59:22.582969 containerd[2009]: time="2024-07-02T08:59:22.582243428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:22.585753 containerd[2009]: time="2024-07-02T08:59:22.585683480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 08:59:22.588750 containerd[2009]: time="2024-07-02T08:59:22.588306128Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:22.593828 containerd[2009]: time="2024-07-02T08:59:22.593752964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:22.597961 containerd[2009]: time="2024-07-02T08:59:22.597814748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.347969344s" Jul 2 08:59:22.597961 containerd[2009]: time="2024-07-02T08:59:22.597880172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 08:59:22.600094 containerd[2009]: time="2024-07-02T08:59:22.599725952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 08:59:22.640672 containerd[2009]: time="2024-07-02T08:59:22.640584404Z" level=info msg="CreateContainer within sandbox \"94f572b5e80d5f2bbbeee106f936961a26eb627e5ebdd3bee24b89e913dffb90\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 08:59:22.682890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76028519.mount: Deactivated successfully. 
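
For scale, the "Pulled image" entry above carries enough for a rough throughput estimate: a reported image size of 28843073 bytes pulled in 2.347969344 s. A back-of-the-envelope calculation from those two figures alone, ignoring registry round-trips and any content already cached locally:

package main

import "fmt"

// Back-of-the-envelope pull rate for the typha image, using only the two
// figures from the containerd "Pulled image" entry above. Approximate by
// construction: it ignores registry round-trips and locally cached layers.
func main() {
	const sizeBytes = 28843073.0 // reported image size
	const pullSecs = 2.347969344 // reported pull duration
	fmt.Printf("~%.1f MiB/s effective pull rate\n", sizeBytes/pullSecs/(1<<20))
}

This works out to roughly 11-12 MiB/s for this particular pull.
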
Jul 2 08:59:22.694748 containerd[2009]: time="2024-07-02T08:59:22.694675508Z" level=info msg="CreateContainer within sandbox \"94f572b5e80d5f2bbbeee106f936961a26eb627e5ebdd3bee24b89e913dffb90\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e2478d6a03e2f34336601f173e00fcfdf590792de2da62132d87db350743e369\"" Jul 2 08:59:22.696226 containerd[2009]: time="2024-07-02T08:59:22.696151892Z" level=info msg="StartContainer for \"e2478d6a03e2f34336601f173e00fcfdf590792de2da62132d87db350743e369\"" Jul 2 08:59:22.771796 systemd[1]: Started cri-containerd-e2478d6a03e2f34336601f173e00fcfdf590792de2da62132d87db350743e369.scope - libcontainer container e2478d6a03e2f34336601f173e00fcfdf590792de2da62132d87db350743e369. Jul 2 08:59:22.871283 containerd[2009]: time="2024-07-02T08:59:22.871028445Z" level=info msg="StartContainer for \"e2478d6a03e2f34336601f173e00fcfdf590792de2da62132d87db350743e369\" returns successfully" Jul 2 08:59:23.413439 kubelet[3252]: I0702 08:59:23.412827 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6bb4c9b8d-h6b94" podStartSLOduration=2.060379156 podStartE2EDuration="4.412752992s" podCreationTimestamp="2024-07-02 08:59:19 +0000 UTC" firstStartedPulling="2024-07-02 08:59:20.245901928 +0000 UTC m=+22.301245900" lastFinishedPulling="2024-07-02 08:59:22.598275776 +0000 UTC m=+24.653619736" observedRunningTime="2024-07-02 08:59:23.411712916 +0000 UTC m=+25.467056912" watchObservedRunningTime="2024-07-02 08:59:23.412752992 +0000 UTC m=+25.468096988" Jul 2 08:59:23.453639 kubelet[3252]: E0702 08:59:23.453145 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.453639 kubelet[3252]: W0702 08:59:23.453181 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.453639 kubelet[3252]: E0702 08:59:23.453514 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.455611 kubelet[3252]: E0702 08:59:23.455357 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.455611 kubelet[3252]: W0702 08:59:23.455425 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.455611 kubelet[3252]: E0702 08:59:23.455537 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.458862 kubelet[3252]: E0702 08:59:23.458547 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.458862 kubelet[3252]: W0702 08:59:23.458602 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.458862 kubelet[3252]: E0702 08:59:23.458637 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:23.461798 kubelet[3252]: E0702 08:59:23.461543 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.461798 kubelet[3252]: W0702 08:59:23.461605 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.461798 kubelet[3252]: E0702 08:59:23.461645 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.464717 kubelet[3252]: E0702 08:59:23.464200 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.464717 kubelet[3252]: W0702 08:59:23.464233 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.464717 kubelet[3252]: E0702 08:59:23.464294 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.467864 kubelet[3252]: E0702 08:59:23.466447 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.467864 kubelet[3252]: W0702 08:59:23.466528 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.467864 kubelet[3252]: E0702 08:59:23.466569 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.468448 kubelet[3252]: E0702 08:59:23.468206 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.468448 kubelet[3252]: W0702 08:59:23.468263 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.468448 kubelet[3252]: E0702 08:59:23.468300 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.469389 kubelet[3252]: E0702 08:59:23.469268 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.469389 kubelet[3252]: W0702 08:59:23.469319 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.469389 kubelet[3252]: E0702 08:59:23.469352 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:23.474701 kubelet[3252]: E0702 08:59:23.472656 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.474701 kubelet[3252]: W0702 08:59:23.472801 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.474701 kubelet[3252]: E0702 08:59:23.474629 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.475834 kubelet[3252]: E0702 08:59:23.475602 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.475834 kubelet[3252]: W0702 08:59:23.475657 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.475834 kubelet[3252]: E0702 08:59:23.475695 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.476809 kubelet[3252]: E0702 08:59:23.476660 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.476809 kubelet[3252]: W0702 08:59:23.476691 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.476809 kubelet[3252]: E0702 08:59:23.476752 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.478838 kubelet[3252]: E0702 08:59:23.478211 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.478838 kubelet[3252]: W0702 08:59:23.478245 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.478838 kubelet[3252]: E0702 08:59:23.478296 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.480050 kubelet[3252]: E0702 08:59:23.479734 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.480050 kubelet[3252]: W0702 08:59:23.479770 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.480050 kubelet[3252]: E0702 08:59:23.479828 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:23.481986 kubelet[3252]: E0702 08:59:23.481930 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.481986 kubelet[3252]: W0702 08:59:23.481973 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.482362 kubelet[3252]: E0702 08:59:23.482012 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.484066 kubelet[3252]: E0702 08:59:23.484004 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.484066 kubelet[3252]: W0702 08:59:23.484055 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.484351 kubelet[3252]: E0702 08:59:23.484093 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.484894 kubelet[3252]: E0702 08:59:23.484848 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.484894 kubelet[3252]: W0702 08:59:23.484885 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.485475 kubelet[3252]: E0702 08:59:23.484921 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.485878 kubelet[3252]: E0702 08:59:23.485829 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.485878 kubelet[3252]: W0702 08:59:23.485864 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.487624 kubelet[3252]: E0702 08:59:23.485913 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.488084 kubelet[3252]: E0702 08:59:23.488042 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.488084 kubelet[3252]: W0702 08:59:23.488080 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.488387 kubelet[3252]: E0702 08:59:23.488127 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:23.489187 kubelet[3252]: E0702 08:59:23.488953 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.489187 kubelet[3252]: W0702 08:59:23.488985 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.489187 kubelet[3252]: E0702 08:59:23.489061 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.490238 kubelet[3252]: E0702 08:59:23.489948 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.490238 kubelet[3252]: W0702 08:59:23.490054 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.490238 kubelet[3252]: E0702 08:59:23.490144 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.491250 kubelet[3252]: E0702 08:59:23.491007 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.491250 kubelet[3252]: W0702 08:59:23.491035 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.491250 kubelet[3252]: E0702 08:59:23.491136 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.492128 kubelet[3252]: E0702 08:59:23.491905 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.492128 kubelet[3252]: W0702 08:59:23.491936 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.492671 kubelet[3252]: E0702 08:59:23.492367 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.493010 kubelet[3252]: E0702 08:59:23.492939 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.493010 kubelet[3252]: W0702 08:59:23.492966 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.493347 kubelet[3252]: E0702 08:59:23.493176 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:23.496254 kubelet[3252]: E0702 08:59:23.495563 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.496254 kubelet[3252]: W0702 08:59:23.495606 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.496254 kubelet[3252]: E0702 08:59:23.496183 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.496254 kubelet[3252]: W0702 08:59:23.496203 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.496706 kubelet[3252]: E0702 08:59:23.496275 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.496706 kubelet[3252]: E0702 08:59:23.496316 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.498852 kubelet[3252]: E0702 08:59:23.498689 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.498852 kubelet[3252]: W0702 08:59:23.498722 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.499616 kubelet[3252]: E0702 08:59:23.499169 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.501533 kubelet[3252]: E0702 08:59:23.500953 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.501533 kubelet[3252]: W0702 08:59:23.500981 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.501533 kubelet[3252]: E0702 08:59:23.501026 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.502704 kubelet[3252]: E0702 08:59:23.502390 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.502704 kubelet[3252]: W0702 08:59:23.502418 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.502704 kubelet[3252]: E0702 08:59:23.502513 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:23.504381 kubelet[3252]: E0702 08:59:23.503892 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.504381 kubelet[3252]: W0702 08:59:23.503938 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.504381 kubelet[3252]: E0702 08:59:23.504056 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.505147 kubelet[3252]: E0702 08:59:23.505013 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.505147 kubelet[3252]: W0702 08:59:23.505044 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.505147 kubelet[3252]: E0702 08:59:23.505128 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.506010 kubelet[3252]: E0702 08:59:23.505785 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.506010 kubelet[3252]: W0702 08:59:23.505836 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.506500 kubelet[3252]: E0702 08:59:23.506294 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.507914 kubelet[3252]: E0702 08:59:23.506712 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.507914 kubelet[3252]: W0702 08:59:23.506739 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.507914 kubelet[3252]: E0702 08:59:23.506772 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:23.508302 kubelet[3252]: E0702 08:59:23.508275 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:23.508442 kubelet[3252]: W0702 08:59:23.508415 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:23.508588 kubelet[3252]: E0702 08:59:23.508565 3252 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:23.992667 containerd[2009]: time="2024-07-02T08:59:23.992453891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:23.996957 containerd[2009]: time="2024-07-02T08:59:23.996839507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 08:59:23.998434 containerd[2009]: time="2024-07-02T08:59:23.998363039Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:24.010544 containerd[2009]: time="2024-07-02T08:59:24.009580447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:24.012662 containerd[2009]: time="2024-07-02T08:59:24.012441427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.412651323s" Jul 2 08:59:24.012662 containerd[2009]: time="2024-07-02T08:59:24.012528427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 08:59:24.016271 containerd[2009]: time="2024-07-02T08:59:24.016171507Z" level=info msg="CreateContainer within sandbox \"07a7a8793175c507da1d2dae585ddf7c034811a5d3e256d11426535ad998e16d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 08:59:24.042415 containerd[2009]: time="2024-07-02T08:59:24.042341359Z" level=info msg="CreateContainer within sandbox \"07a7a8793175c507da1d2dae585ddf7c034811a5d3e256d11426535ad998e16d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aff140923c2f9b657e516bb67f32e586898e4c7c1a8f3747f08e6be092891a52\"" Jul 2 08:59:24.044287 containerd[2009]: time="2024-07-02T08:59:24.043830103Z" level=info msg="StartContainer for \"aff140923c2f9b657e516bb67f32e586898e4c7c1a8f3747f08e6be092891a52\"" Jul 2 08:59:24.126954 systemd[1]: Started cri-containerd-aff140923c2f9b657e516bb67f32e586898e4c7c1a8f3747f08e6be092891a52.scope - libcontainer container aff140923c2f9b657e516bb67f32e586898e4c7c1a8f3747f08e6be092891a52. Jul 2 08:59:24.188801 kubelet[3252]: E0702 08:59:24.188743 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-58jnw" podUID="854c111d-7e31-40e1-a3bc-810de7814240" Jul 2 08:59:24.204261 containerd[2009]: time="2024-07-02T08:59:24.204085184Z" level=info msg="StartContainer for \"aff140923c2f9b657e516bb67f32e586898e4c7c1a8f3747f08e6be092891a52\" returns successfully" Jul 2 08:59:24.239815 systemd[1]: cri-containerd-aff140923c2f9b657e516bb67f32e586898e4c7c1a8f3747f08e6be092891a52.scope: Deactivated successfully. 
Jul 2 08:59:24.612371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aff140923c2f9b657e516bb67f32e586898e4c7c1a8f3747f08e6be092891a52-rootfs.mount: Deactivated successfully. Jul 2 08:59:24.837325 containerd[2009]: time="2024-07-02T08:59:24.837247631Z" level=info msg="shim disconnected" id=aff140923c2f9b657e516bb67f32e586898e4c7c1a8f3747f08e6be092891a52 namespace=k8s.io Jul 2 08:59:24.838002 containerd[2009]: time="2024-07-02T08:59:24.837626303Z" level=warning msg="cleaning up after shim disconnected" id=aff140923c2f9b657e516bb67f32e586898e4c7c1a8f3747f08e6be092891a52 namespace=k8s.io Jul 2 08:59:24.838002 containerd[2009]: time="2024-07-02T08:59:24.837661559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:59:24.869428 containerd[2009]: time="2024-07-02T08:59:24.869340887Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:59:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 08:59:25.408803 containerd[2009]: time="2024-07-02T08:59:25.408742570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 08:59:26.184316 kubelet[3252]: E0702 08:59:26.183652 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-58jnw" podUID="854c111d-7e31-40e1-a3bc-810de7814240" Jul 2 08:59:28.185973 kubelet[3252]: E0702 08:59:28.185916 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-58jnw" podUID="854c111d-7e31-40e1-a3bc-810de7814240" Jul 2 08:59:29.412511 containerd[2009]: time="2024-07-02T08:59:29.412413710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:29.415412 containerd[2009]: time="2024-07-02T08:59:29.415297106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 08:59:29.416585 containerd[2009]: time="2024-07-02T08:59:29.416500802Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:29.421172 containerd[2009]: time="2024-07-02T08:59:29.421086326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:29.422968 containerd[2009]: time="2024-07-02T08:59:29.422848274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 4.014034376s" Jul 2 08:59:29.422968 containerd[2009]: time="2024-07-02T08:59:29.422927078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference 
\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 08:59:29.428875 containerd[2009]: time="2024-07-02T08:59:29.428624498Z" level=info msg="CreateContainer within sandbox \"07a7a8793175c507da1d2dae585ddf7c034811a5d3e256d11426535ad998e16d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 08:59:29.452250 containerd[2009]: time="2024-07-02T08:59:29.452174450Z" level=info msg="CreateContainer within sandbox \"07a7a8793175c507da1d2dae585ddf7c034811a5d3e256d11426535ad998e16d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b00d8430b3edac214db41b47c35ba2c86d36109839adeafb08693d8b64f52fa9\"" Jul 2 08:59:29.455619 containerd[2009]: time="2024-07-02T08:59:29.453524678Z" level=info msg="StartContainer for \"b00d8430b3edac214db41b47c35ba2c86d36109839adeafb08693d8b64f52fa9\"" Jul 2 08:59:29.507822 systemd[1]: Started cri-containerd-b00d8430b3edac214db41b47c35ba2c86d36109839adeafb08693d8b64f52fa9.scope - libcontainer container b00d8430b3edac214db41b47c35ba2c86d36109839adeafb08693d8b64f52fa9. Jul 2 08:59:29.560733 containerd[2009]: time="2024-07-02T08:59:29.560616830Z" level=info msg="StartContainer for \"b00d8430b3edac214db41b47c35ba2c86d36109839adeafb08693d8b64f52fa9\" returns successfully" Jul 2 08:59:30.186159 kubelet[3252]: E0702 08:59:30.184120 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-58jnw" podUID="854c111d-7e31-40e1-a3bc-810de7814240" Jul 2 08:59:30.629938 containerd[2009]: time="2024-07-02T08:59:30.629870092Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:59:30.633368 systemd[1]: cri-containerd-b00d8430b3edac214db41b47c35ba2c86d36109839adeafb08693d8b64f52fa9.scope: Deactivated successfully. Jul 2 08:59:30.669352 kubelet[3252]: I0702 08:59:30.666996 3252 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 08:59:30.692255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b00d8430b3edac214db41b47c35ba2c86d36109839adeafb08693d8b64f52fa9-rootfs.mount: Deactivated successfully. Jul 2 08:59:30.730933 kubelet[3252]: I0702 08:59:30.730873 3252 topology_manager.go:215] "Topology Admit Handler" podUID="04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9" podNamespace="kube-system" podName="coredns-76f75df574-prdlg" Jul 2 08:59:30.733132 kubelet[3252]: I0702 08:59:30.733070 3252 topology_manager.go:215] "Topology Admit Handler" podUID="69284943-ceac-4a72-9f41-ad6e22e1a962" podNamespace="kube-system" podName="coredns-76f75df574-9d8vj" Jul 2 08:59:30.737194 kubelet[3252]: I0702 08:59:30.737130 3252 topology_manager.go:215] "Topology Admit Handler" podUID="eaf1a623-0ff8-41cc-be6c-c8208e0fe27d" podNamespace="calico-system" podName="calico-kube-controllers-599895b6cb-pdhbb" Jul 2 08:59:30.755296 systemd[1]: Created slice kubepods-burstable-pod04168a1b_6f74_4de0_b54f_d7ae7ddfc2f9.slice - libcontainer container kubepods-burstable-pod04168a1b_6f74_4de0_b54f_d7ae7ddfc2f9.slice. 
Jul 2 08:59:30.772036 systemd[1]: Created slice kubepods-burstable-pod69284943_ceac_4a72_9f41_ad6e22e1a962.slice - libcontainer container kubepods-burstable-pod69284943_ceac_4a72_9f41_ad6e22e1a962.slice. Jul 2 08:59:30.792252 systemd[1]: Created slice kubepods-besteffort-podeaf1a623_0ff8_41cc_be6c_c8208e0fe27d.slice - libcontainer container kubepods-besteffort-podeaf1a623_0ff8_41cc_be6c_c8208e0fe27d.slice. Jul 2 08:59:30.843522 kubelet[3252]: I0702 08:59:30.843409 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2j8t\" (UniqueName: \"kubernetes.io/projected/04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9-kube-api-access-p2j8t\") pod \"coredns-76f75df574-prdlg\" (UID: \"04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9\") " pod="kube-system/coredns-76f75df574-prdlg" Jul 2 08:59:30.844059 kubelet[3252]: I0702 08:59:30.843635 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69284943-ceac-4a72-9f41-ad6e22e1a962-config-volume\") pod \"coredns-76f75df574-9d8vj\" (UID: \"69284943-ceac-4a72-9f41-ad6e22e1a962\") " pod="kube-system/coredns-76f75df574-9d8vj" Jul 2 08:59:30.844264 kubelet[3252]: I0702 08:59:30.844178 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxtms\" (UniqueName: \"kubernetes.io/projected/eaf1a623-0ff8-41cc-be6c-c8208e0fe27d-kube-api-access-lxtms\") pod \"calico-kube-controllers-599895b6cb-pdhbb\" (UID: \"eaf1a623-0ff8-41cc-be6c-c8208e0fe27d\") " pod="calico-system/calico-kube-controllers-599895b6cb-pdhbb" Jul 2 08:59:30.844441 kubelet[3252]: I0702 08:59:30.844378 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg2gv\" (UniqueName: \"kubernetes.io/projected/69284943-ceac-4a72-9f41-ad6e22e1a962-kube-api-access-wg2gv\") pod \"coredns-76f75df574-9d8vj\" (UID: \"69284943-ceac-4a72-9f41-ad6e22e1a962\") " pod="kube-system/coredns-76f75df574-9d8vj" Jul 2 08:59:30.844723 kubelet[3252]: I0702 08:59:30.844588 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9-config-volume\") pod \"coredns-76f75df574-prdlg\" (UID: \"04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9\") " pod="kube-system/coredns-76f75df574-prdlg" Jul 2 08:59:30.844723 kubelet[3252]: I0702 08:59:30.844680 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaf1a623-0ff8-41cc-be6c-c8208e0fe27d-tigera-ca-bundle\") pod \"calico-kube-controllers-599895b6cb-pdhbb\" (UID: \"eaf1a623-0ff8-41cc-be6c-c8208e0fe27d\") " pod="calico-system/calico-kube-controllers-599895b6cb-pdhbb" Jul 2 08:59:31.065906 containerd[2009]: time="2024-07-02T08:59:31.065741882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-prdlg,Uid:04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9,Namespace:kube-system,Attempt:0,}" Jul 2 08:59:31.088026 containerd[2009]: time="2024-07-02T08:59:31.087630734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9d8vj,Uid:69284943-ceac-4a72-9f41-ad6e22e1a962,Namespace:kube-system,Attempt:0,}" Jul 2 08:59:31.098675 containerd[2009]: time="2024-07-02T08:59:31.098269718Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-599895b6cb-pdhbb,Uid:eaf1a623-0ff8-41cc-be6c-c8208e0fe27d,Namespace:calico-system,Attempt:0,}" Jul 2 08:59:31.406636 containerd[2009]: time="2024-07-02T08:59:31.406378048Z" level=info msg="shim disconnected" id=b00d8430b3edac214db41b47c35ba2c86d36109839adeafb08693d8b64f52fa9 namespace=k8s.io Jul 2 08:59:31.406636 containerd[2009]: time="2024-07-02T08:59:31.406454644Z" level=warning msg="cleaning up after shim disconnected" id=b00d8430b3edac214db41b47c35ba2c86d36109839adeafb08693d8b64f52fa9 namespace=k8s.io Jul 2 08:59:31.406636 containerd[2009]: time="2024-07-02T08:59:31.406476748Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:59:31.455510 containerd[2009]: time="2024-07-02T08:59:31.454906732Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:59:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 08:59:31.615231 containerd[2009]: time="2024-07-02T08:59:31.615002069Z" level=error msg="Failed to destroy network for sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.616090 containerd[2009]: time="2024-07-02T08:59:31.616033469Z" level=error msg="encountered an error cleaning up failed sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.616566 containerd[2009]: time="2024-07-02T08:59:31.616394765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9d8vj,Uid:69284943-ceac-4a72-9f41-ad6e22e1a962,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.617703 kubelet[3252]: E0702 08:59:31.617135 3252 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.617703 kubelet[3252]: E0702 08:59:31.617226 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9d8vj" Jul 2 08:59:31.617703 kubelet[3252]: E0702 08:59:31.617267 3252 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9d8vj" Jul 2 08:59:31.618435 kubelet[3252]: E0702 08:59:31.617347 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-9d8vj_kube-system(69284943-ceac-4a72-9f41-ad6e22e1a962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-9d8vj_kube-system(69284943-ceac-4a72-9f41-ad6e22e1a962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9d8vj" podUID="69284943-ceac-4a72-9f41-ad6e22e1a962" Jul 2 08:59:31.636958 containerd[2009]: time="2024-07-02T08:59:31.636887741Z" level=error msg="Failed to destroy network for sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.638241 containerd[2009]: time="2024-07-02T08:59:31.638179061Z" level=error msg="encountered an error cleaning up failed sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.639469 containerd[2009]: time="2024-07-02T08:59:31.638284085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-prdlg,Uid:04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.639779 kubelet[3252]: E0702 08:59:31.638651 3252 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.639779 kubelet[3252]: E0702 08:59:31.638722 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-prdlg" Jul 2 08:59:31.639779 kubelet[3252]: E0702 08:59:31.638761 3252 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-prdlg" Jul 2 08:59:31.639988 kubelet[3252]: E0702 08:59:31.638836 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-prdlg_kube-system(04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-prdlg_kube-system(04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-prdlg" podUID="04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9" Jul 2 08:59:31.641673 containerd[2009]: time="2024-07-02T08:59:31.641392205Z" level=error msg="Failed to destroy network for sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.642704 containerd[2009]: time="2024-07-02T08:59:31.642629429Z" level=error msg="encountered an error cleaning up failed sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.643663 containerd[2009]: time="2024-07-02T08:59:31.642725057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-599895b6cb-pdhbb,Uid:eaf1a623-0ff8-41cc-be6c-c8208e0fe27d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.644594 kubelet[3252]: E0702 08:59:31.643321 3252 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:31.644594 kubelet[3252]: E0702 08:59:31.643398 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-599895b6cb-pdhbb" Jul 2 08:59:31.644594 kubelet[3252]: E0702 08:59:31.643439 3252 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-599895b6cb-pdhbb" Jul 2 08:59:31.644833 kubelet[3252]: E0702 08:59:31.643563 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-599895b6cb-pdhbb_calico-system(eaf1a623-0ff8-41cc-be6c-c8208e0fe27d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-599895b6cb-pdhbb_calico-system(eaf1a623-0ff8-41cc-be6c-c8208e0fe27d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-599895b6cb-pdhbb" podUID="eaf1a623-0ff8-41cc-be6c-c8208e0fe27d" Jul 2 08:59:31.687993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c-shm.mount: Deactivated successfully. Jul 2 08:59:32.195114 systemd[1]: Created slice kubepods-besteffort-pod854c111d_7e31_40e1_a3bc_810de7814240.slice - libcontainer container kubepods-besteffort-pod854c111d_7e31_40e1_a3bc_810de7814240.slice. Jul 2 08:59:32.200017 containerd[2009]: time="2024-07-02T08:59:32.199963360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-58jnw,Uid:854c111d-7e31-40e1-a3bc-810de7814240,Namespace:calico-system,Attempt:0,}" Jul 2 08:59:32.314751 containerd[2009]: time="2024-07-02T08:59:32.314544676Z" level=error msg="Failed to destroy network for sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:32.317111 containerd[2009]: time="2024-07-02T08:59:32.316981876Z" level=error msg="encountered an error cleaning up failed sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:32.320645 containerd[2009]: time="2024-07-02T08:59:32.317106760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-58jnw,Uid:854c111d-7e31-40e1-a3bc-810de7814240,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:32.320761 kubelet[3252]: E0702 08:59:32.317438 3252 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:32.320761 kubelet[3252]: E0702 08:59:32.317530 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-58jnw" Jul 2 08:59:32.320761 kubelet[3252]: E0702 08:59:32.317577 3252 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-58jnw" Jul 2 08:59:32.320939 kubelet[3252]: E0702 08:59:32.317656 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-58jnw_calico-system(854c111d-7e31-40e1-a3bc-810de7814240)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-58jnw_calico-system(854c111d-7e31-40e1-a3bc-810de7814240)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-58jnw" podUID="854c111d-7e31-40e1-a3bc-810de7814240" Jul 2 08:59:32.323042 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd-shm.mount: Deactivated successfully. 
Jul 2 08:59:32.431367 kubelet[3252]: I0702 08:59:32.431276 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:32.434893 containerd[2009]: time="2024-07-02T08:59:32.432758489Z" level=info msg="StopPodSandbox for \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\"" Jul 2 08:59:32.434893 containerd[2009]: time="2024-07-02T08:59:32.433171145Z" level=info msg="Ensure that sandbox 82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e in task-service has been cleanup successfully" Jul 2 08:59:32.437504 kubelet[3252]: I0702 08:59:32.436791 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:32.439116 containerd[2009]: time="2024-07-02T08:59:32.438524969Z" level=info msg="StopPodSandbox for \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\"" Jul 2 08:59:32.442603 containerd[2009]: time="2024-07-02T08:59:32.442473749Z" level=info msg="Ensure that sandbox 77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e in task-service has been cleanup successfully" Jul 2 08:59:32.447231 kubelet[3252]: I0702 08:59:32.446392 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:32.452389 containerd[2009]: time="2024-07-02T08:59:32.452082821Z" level=info msg="StopPodSandbox for \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\"" Jul 2 08:59:32.457969 containerd[2009]: time="2024-07-02T08:59:32.457889933Z" level=info msg="Ensure that sandbox 67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c in task-service has been cleanup successfully" Jul 2 08:59:32.484871 containerd[2009]: time="2024-07-02T08:59:32.484814813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 08:59:32.493522 kubelet[3252]: I0702 08:59:32.492399 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:32.494893 containerd[2009]: time="2024-07-02T08:59:32.494582921Z" level=info msg="StopPodSandbox for \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\"" Jul 2 08:59:32.500898 containerd[2009]: time="2024-07-02T08:59:32.500823941Z" level=info msg="Ensure that sandbox 0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd in task-service has been cleanup successfully" Jul 2 08:59:32.586041 containerd[2009]: time="2024-07-02T08:59:32.585955613Z" level=error msg="StopPodSandbox for \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\" failed" error="failed to destroy network for sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:32.587563 kubelet[3252]: E0702 08:59:32.586587 3252 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:32.587563 kubelet[3252]: E0702 08:59:32.586743 3252 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e"} Jul 2 08:59:32.587563 kubelet[3252]: E0702 08:59:32.586836 3252 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69284943-ceac-4a72-9f41-ad6e22e1a962\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:59:32.587563 kubelet[3252]: E0702 08:59:32.586915 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69284943-ceac-4a72-9f41-ad6e22e1a962\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9d8vj" podUID="69284943-ceac-4a72-9f41-ad6e22e1a962" Jul 2 08:59:32.592432 containerd[2009]: time="2024-07-02T08:59:32.592245521Z" level=error msg="StopPodSandbox for \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\" failed" error="failed to destroy network for sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:32.592992 kubelet[3252]: E0702 08:59:32.592958 3252 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:32.593292 kubelet[3252]: E0702 08:59:32.593265 3252 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e"} Jul 2 08:59:32.594737 kubelet[3252]: E0702 08:59:32.594582 3252 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eaf1a623-0ff8-41cc-be6c-c8208e0fe27d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:59:32.594737 kubelet[3252]: E0702 08:59:32.594688 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eaf1a623-0ff8-41cc-be6c-c8208e0fe27d\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-599895b6cb-pdhbb" podUID="eaf1a623-0ff8-41cc-be6c-c8208e0fe27d" Jul 2 08:59:32.609864 containerd[2009]: time="2024-07-02T08:59:32.609762234Z" level=error msg="StopPodSandbox for \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\" failed" error="failed to destroy network for sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:32.610429 kubelet[3252]: E0702 08:59:32.610396 3252 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:32.610789 kubelet[3252]: E0702 08:59:32.610751 3252 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c"} Jul 2 08:59:32.610993 kubelet[3252]: E0702 08:59:32.610970 3252 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:59:32.611222 kubelet[3252]: E0702 08:59:32.611101 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-prdlg" podUID="04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9" Jul 2 08:59:32.613861 containerd[2009]: time="2024-07-02T08:59:32.613791714Z" level=error msg="StopPodSandbox for \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\" failed" error="failed to destroy network for sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:59:32.614210 kubelet[3252]: E0702 08:59:32.614127 3252 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:32.614210 kubelet[3252]: E0702 08:59:32.614194 3252 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd"} Jul 2 08:59:32.614356 kubelet[3252]: E0702 08:59:32.614257 3252 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"854c111d-7e31-40e1-a3bc-810de7814240\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:59:32.614356 kubelet[3252]: E0702 08:59:32.614307 3252 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"854c111d-7e31-40e1-a3bc-810de7814240\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-58jnw" podUID="854c111d-7e31-40e1-a3bc-810de7814240" Jul 2 08:59:37.824166 systemd[1]: Started sshd@7-172.31.30.27:22-147.75.109.163:51320.service - OpenSSH per-connection server daemon (147.75.109.163:51320). Jul 2 08:59:38.076789 sshd[4406]: Accepted publickey for core from 147.75.109.163 port 51320 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:38.082861 sshd[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:38.095051 systemd-logind[1993]: New session 8 of user core. Jul 2 08:59:38.102823 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 08:59:38.407780 sshd[4406]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:38.418957 systemd[1]: sshd@7-172.31.30.27:22-147.75.109.163:51320.service: Deactivated successfully. Jul 2 08:59:38.427434 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:59:38.430953 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:59:38.434317 systemd-logind[1993]: Removed session 8. Jul 2 08:59:38.474621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389839163.mount: Deactivated successfully. 
Jul 2 08:59:38.542862 containerd[2009]: time="2024-07-02T08:59:38.542535119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:38.545350 containerd[2009]: time="2024-07-02T08:59:38.545279219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 08:59:38.547723 containerd[2009]: time="2024-07-02T08:59:38.547657043Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:38.555683 containerd[2009]: time="2024-07-02T08:59:38.555514235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:38.558469 containerd[2009]: time="2024-07-02T08:59:38.558407795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 6.073301646s" Jul 2 08:59:38.559529 containerd[2009]: time="2024-07-02T08:59:38.558675623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 08:59:38.586428 containerd[2009]: time="2024-07-02T08:59:38.586365335Z" level=info msg="CreateContainer within sandbox \"07a7a8793175c507da1d2dae585ddf7c034811a5d3e256d11426535ad998e16d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 08:59:38.614791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254332106.mount: Deactivated successfully. Jul 2 08:59:38.617195 containerd[2009]: time="2024-07-02T08:59:38.616145423Z" level=info msg="CreateContainer within sandbox \"07a7a8793175c507da1d2dae585ddf7c034811a5d3e256d11426535ad998e16d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"da59310792d36d333ffcd8b7ec93d045e5dcb081d6d6d9642bc080f6d3f36b89\"" Jul 2 08:59:38.617195 containerd[2009]: time="2024-07-02T08:59:38.616991603Z" level=info msg="StartContainer for \"da59310792d36d333ffcd8b7ec93d045e5dcb081d6d6d9642bc080f6d3f36b89\"" Jul 2 08:59:38.663799 systemd[1]: Started cri-containerd-da59310792d36d333ffcd8b7ec93d045e5dcb081d6d6d9642bc080f6d3f36b89.scope - libcontainer container da59310792d36d333ffcd8b7ec93d045e5dcb081d6d6d9642bc080f6d3f36b89. Jul 2 08:59:38.720409 containerd[2009]: time="2024-07-02T08:59:38.720271404Z" level=info msg="StartContainer for \"da59310792d36d333ffcd8b7ec93d045e5dcb081d6d6d9642bc080f6d3f36b89\" returns successfully" Jul 2 08:59:38.849806 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 08:59:38.849927 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
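Annotation: the calico/node pull above is reported by containerd in the k8s.io namespace (ImageCreate events, then the Pulled summary with repo tag, digest, size, and elapsed time). In this log the kubelet drives the pull through the CRI API, but for reference the same operation could be issued directly with containerd's Go client; a minimal sketch, assuming the default socket path /run/containerd/containerd.sock:

```go
// Minimal sketch of pulling the same image through containerd's Go client
// in the "k8s.io" namespace used by the log entries above. Illustrative only.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.28.0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s, %d bytes", img.Name(), size)
}
```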
Jul 2 08:59:39.564179 kubelet[3252]: I0702 08:59:39.563329 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-vq9tx" podStartSLOduration=2.618458959 podStartE2EDuration="20.56326512s" podCreationTimestamp="2024-07-02 08:59:19 +0000 UTC" firstStartedPulling="2024-07-02 08:59:20.614138046 +0000 UTC m=+22.669482018" lastFinishedPulling="2024-07-02 08:59:38.558944219 +0000 UTC m=+40.614288179" observedRunningTime="2024-07-02 08:59:39.561385884 +0000 UTC m=+41.616729880" watchObservedRunningTime="2024-07-02 08:59:39.56326512 +0000 UTC m=+41.618609104" Jul 2 08:59:41.397607 (udev-worker)[4458]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:59:41.399748 systemd-networkd[1850]: vxlan.calico: Link UP Jul 2 08:59:41.399759 systemd-networkd[1850]: vxlan.calico: Gained carrier Jul 2 08:59:41.435595 (udev-worker)[4461]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:59:43.438735 systemd-networkd[1850]: vxlan.calico: Gained IPv6LL Jul 2 08:59:43.448990 systemd[1]: Started sshd@8-172.31.30.27:22-147.75.109.163:41648.service - OpenSSH per-connection server daemon (147.75.109.163:41648). Jul 2 08:59:43.627366 sshd[4723]: Accepted publickey for core from 147.75.109.163 port 41648 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:43.630462 sshd[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:43.639588 systemd-logind[1993]: New session 9 of user core. Jul 2 08:59:43.648815 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 08:59:43.898769 sshd[4723]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:43.906172 systemd[1]: sshd@8-172.31.30.27:22-147.75.109.163:41648.service: Deactivated successfully. Jul 2 08:59:43.910410 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:59:43.912344 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit. Jul 2 08:59:43.915250 systemd-logind[1993]: Removed session 9. Jul 2 08:59:45.184415 containerd[2009]: time="2024-07-02T08:59:45.184327864Z" level=info msg="StopPodSandbox for \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\"" Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.288 [INFO][4749] k8s.go 608: Cleaning up netns ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.288 [INFO][4749] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" iface="eth0" netns="/var/run/netns/cni-7b72b53f-d888-4044-5a8b-ba4b9066c2ee" Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.289 [INFO][4749] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" iface="eth0" netns="/var/run/netns/cni-7b72b53f-d888-4044-5a8b-ba4b9066c2ee" Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.290 [INFO][4749] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
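Annotation: the pod_startup_latency_tracker entry above reports podStartSLOduration=2.618458959 and podStartE2EDuration=20.56326512s for calico-node-vq9tx. The numbers are consistent with the SLO duration being the end-to-end startup latency minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 17.945s). A short Go check using the timestamps copied from the entry (monotonic "m=+" suffixes dropped):

```go
// Reproduces the duration arithmetic in the kubelet entry above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-07-02 08:59:19 +0000 UTC")
	firstPull := mustParse("2024-07-02 08:59:20.614138046 +0000 UTC")
	lastPull := mustParse("2024-07-02 08:59:38.558944219 +0000 UTC")
	observed := mustParse("2024-07-02 08:59:39.56326512 +0000 UTC")

	e2e := observed.Sub(created)       // podStartE2EDuration ≈ 20.563s
	pulling := lastPull.Sub(firstPull) // image pull window ≈ 17.945s
	slo := e2e - pulling               // podStartSLOduration ≈ 2.618s
	fmt.Println(e2e, pulling, slo)
}
```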
ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" iface="eth0" netns="/var/run/netns/cni-7b72b53f-d888-4044-5a8b-ba4b9066c2ee" Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.290 [INFO][4749] k8s.go 615: Releasing IP address(es) ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.290 [INFO][4749] utils.go 188: Calico CNI releasing IP address ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.348 [INFO][4755] ipam_plugin.go 411: Releasing address using handleID ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" HandleID="k8s-pod-network.0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.348 [INFO][4755] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.349 [INFO][4755] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.363 [WARNING][4755] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" HandleID="k8s-pod-network.0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.364 [INFO][4755] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" HandleID="k8s-pod-network.0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.367 [INFO][4755] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:45.373304 containerd[2009]: 2024-07-02 08:59:45.370 [INFO][4749] k8s.go 621: Teardown processing complete. ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:45.379385 containerd[2009]: time="2024-07-02T08:59:45.375671693Z" level=info msg="TearDown network for sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\" successfully" Jul 2 08:59:45.379385 containerd[2009]: time="2024-07-02T08:59:45.375717941Z" level=info msg="StopPodSandbox for \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\" returns successfully" Jul 2 08:59:45.379385 containerd[2009]: time="2024-07-02T08:59:45.377304161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-58jnw,Uid:854c111d-7e31-40e1-a3bc-810de7814240,Namespace:calico-system,Attempt:1,}" Jul 2 08:59:45.382776 systemd[1]: run-netns-cni\x2d7b72b53f\x2dd888\x2d4044\x2d5a8b\x2dba4b9066c2ee.mount: Deactivated successfully. Jul 2 08:59:45.668144 systemd-networkd[1850]: cali9f898830a19: Link UP Jul 2 08:59:45.669835 systemd-networkd[1850]: cali9f898830a19: Gained carrier Jul 2 08:59:45.684195 (udev-worker)[4782]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.519 [INFO][4764] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0 csi-node-driver- calico-system 854c111d-7e31-40e1-a3bc-810de7814240 776 0 2024-07-02 08:59:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-30-27 csi-node-driver-58jnw eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali9f898830a19 [] []}} ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Namespace="calico-system" Pod="csi-node-driver-58jnw" WorkloadEndpoint="ip--172--31--30--27-k8s-csi--node--driver--58jnw-" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.520 [INFO][4764] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Namespace="calico-system" Pod="csi-node-driver-58jnw" WorkloadEndpoint="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.578 [INFO][4775] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" HandleID="k8s-pod-network.0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.600 [INFO][4775] ipam_plugin.go 264: Auto assigning IP ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" HandleID="k8s-pod-network.0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebe40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-27", "pod":"csi-node-driver-58jnw", "timestamp":"2024-07-02 08:59:45.578511918 +0000 UTC"}, Hostname:"ip-172-31-30-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.601 [INFO][4775] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.601 [INFO][4775] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.601 [INFO][4775] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-27' Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.604 [INFO][4775] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" host="ip-172-31-30-27" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.617 [INFO][4775] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-27" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.626 [INFO][4775] ipam.go 489: Trying affinity for 192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.630 [INFO][4775] ipam.go 155: Attempting to load block cidr=192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.634 [INFO][4775] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.635 [INFO][4775] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" host="ip-172-31-30-27" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.637 [INFO][4775] ipam.go 1685: Creating new handle: k8s-pod-network.0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.643 [INFO][4775] ipam.go 1203: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" host="ip-172-31-30-27" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.656 [INFO][4775] ipam.go 1216: Successfully claimed IPs: [192.168.12.129/26] block=192.168.12.128/26 handle="k8s-pod-network.0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" host="ip-172-31-30-27" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.656 [INFO][4775] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.129/26] handle="k8s-pod-network.0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" host="ip-172-31-30-27" Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.656 [INFO][4775] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:59:45.708944 containerd[2009]: 2024-07-02 08:59:45.656 [INFO][4775] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.12.129/26] IPv6=[] ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" HandleID="k8s-pod-network.0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:45.713657 containerd[2009]: 2024-07-02 08:59:45.662 [INFO][4764] k8s.go 386: Populated endpoint ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Namespace="calico-system" Pod="csi-node-driver-58jnw" WorkloadEndpoint="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"854c111d-7e31-40e1-a3bc-810de7814240", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"", Pod:"csi-node-driver-58jnw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9f898830a19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:45.713657 containerd[2009]: 2024-07-02 08:59:45.662 [INFO][4764] k8s.go 387: Calico CNI using IPs: [192.168.12.129/32] ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Namespace="calico-system" Pod="csi-node-driver-58jnw" WorkloadEndpoint="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:45.713657 containerd[2009]: 2024-07-02 08:59:45.662 [INFO][4764] dataplane_linux.go 68: Setting the host side veth name to cali9f898830a19 ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Namespace="calico-system" Pod="csi-node-driver-58jnw" WorkloadEndpoint="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:45.713657 containerd[2009]: 2024-07-02 08:59:45.670 [INFO][4764] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Namespace="calico-system" Pod="csi-node-driver-58jnw" WorkloadEndpoint="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:45.713657 containerd[2009]: 2024-07-02 08:59:45.672 [INFO][4764] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Namespace="calico-system" Pod="csi-node-driver-58jnw" WorkloadEndpoint="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"854c111d-7e31-40e1-a3bc-810de7814240", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca", Pod:"csi-node-driver-58jnw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9f898830a19", MAC:"36:14:ac:4d:02:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:45.713657 containerd[2009]: 2024-07-02 08:59:45.701 [INFO][4764] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca" Namespace="calico-system" Pod="csi-node-driver-58jnw" WorkloadEndpoint="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:45.776351 containerd[2009]: time="2024-07-02T08:59:45.774584455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:45.776351 containerd[2009]: time="2024-07-02T08:59:45.774691303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:45.776351 containerd[2009]: time="2024-07-02T08:59:45.774740527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:45.776351 containerd[2009]: time="2024-07-02T08:59:45.774775519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:45.823017 systemd[1]: Started cri-containerd-0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca.scope - libcontainer container 0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca. 
Jul 2 08:59:45.876763 containerd[2009]: time="2024-07-02T08:59:45.876628591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-58jnw,Uid:854c111d-7e31-40e1-a3bc-810de7814240,Namespace:calico-system,Attempt:1,} returns sandbox id \"0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca\"" Jul 2 08:59:45.881521 containerd[2009]: time="2024-07-02T08:59:45.881186719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 08:59:46.186288 containerd[2009]: time="2024-07-02T08:59:46.186126185Z" level=info msg="StopPodSandbox for \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\"" Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.280 [INFO][4848] k8s.go 608: Cleaning up netns ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.280 [INFO][4848] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" iface="eth0" netns="/var/run/netns/cni-f75bd66c-52ce-459d-2f2b-b0c01ac73788" Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.281 [INFO][4848] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" iface="eth0" netns="/var/run/netns/cni-f75bd66c-52ce-459d-2f2b-b0c01ac73788" Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.281 [INFO][4848] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" iface="eth0" netns="/var/run/netns/cni-f75bd66c-52ce-459d-2f2b-b0c01ac73788" Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.282 [INFO][4848] k8s.go 615: Releasing IP address(es) ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.282 [INFO][4848] utils.go 188: Calico CNI releasing IP address ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.333 [INFO][4854] ipam_plugin.go 411: Releasing address using handleID ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" HandleID="k8s-pod-network.77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.333 [INFO][4854] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.333 [INFO][4854] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.347 [WARNING][4854] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" HandleID="k8s-pod-network.77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.347 [INFO][4854] ipam_plugin.go 439: Releasing address using workloadID ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" HandleID="k8s-pod-network.77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.349 [INFO][4854] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:46.355361 containerd[2009]: 2024-07-02 08:59:46.352 [INFO][4848] k8s.go 621: Teardown processing complete. ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:46.357229 containerd[2009]: time="2024-07-02T08:59:46.355750614Z" level=info msg="TearDown network for sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\" successfully" Jul 2 08:59:46.357229 containerd[2009]: time="2024-07-02T08:59:46.355795758Z" level=info msg="StopPodSandbox for \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\" returns successfully" Jul 2 08:59:46.357440 containerd[2009]: time="2024-07-02T08:59:46.357275082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9d8vj,Uid:69284943-ceac-4a72-9f41-ad6e22e1a962,Namespace:kube-system,Attempt:1,}" Jul 2 08:59:46.381468 systemd[1]: run-netns-cni\x2df75bd66c\x2d52ce\x2d459d\x2d2f2b\x2db0c01ac73788.mount: Deactivated successfully. Jul 2 08:59:46.578798 systemd-networkd[1850]: calia6595643521: Link UP Jul 2 08:59:46.580813 systemd-networkd[1850]: calia6595643521: Gained carrier Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.446 [INFO][4864] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0 coredns-76f75df574- kube-system 69284943-ceac-4a72-9f41-ad6e22e1a962 785 0 2024-07-02 08:59:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-27 coredns-76f75df574-9d8vj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6595643521 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Namespace="kube-system" Pod="coredns-76f75df574-9d8vj" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.446 [INFO][4864] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Namespace="kube-system" Pod="coredns-76f75df574-9d8vj" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.501 [INFO][4872] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" HandleID="k8s-pod-network.39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 
08:59:46.519 [INFO][4872] ipam_plugin.go 264: Auto assigning IP ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" HandleID="k8s-pod-network.39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ccc30), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-27", "pod":"coredns-76f75df574-9d8vj", "timestamp":"2024-07-02 08:59:46.501110443 +0000 UTC"}, Hostname:"ip-172-31-30-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.520 [INFO][4872] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.520 [INFO][4872] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.520 [INFO][4872] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-27' Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.523 [INFO][4872] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" host="ip-172-31-30-27" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.531 [INFO][4872] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-27" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.539 [INFO][4872] ipam.go 489: Trying affinity for 192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.542 [INFO][4872] ipam.go 155: Attempting to load block cidr=192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.546 [INFO][4872] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.546 [INFO][4872] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" host="ip-172-31-30-27" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.549 [INFO][4872] ipam.go 1685: Creating new handle: k8s-pod-network.39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277 Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.555 [INFO][4872] ipam.go 1203: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" host="ip-172-31-30-27" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.568 [INFO][4872] ipam.go 1216: Successfully claimed IPs: [192.168.12.130/26] block=192.168.12.128/26 handle="k8s-pod-network.39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" host="ip-172-31-30-27" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.568 [INFO][4872] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.130/26] handle="k8s-pod-network.39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" host="ip-172-31-30-27" Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.568 [INFO][4872] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:59:46.622922 containerd[2009]: 2024-07-02 08:59:46.569 [INFO][4872] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.12.130/26] IPv6=[] ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" HandleID="k8s-pod-network.39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:46.625217 containerd[2009]: 2024-07-02 08:59:46.573 [INFO][4864] k8s.go 386: Populated endpoint ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Namespace="kube-system" Pod="coredns-76f75df574-9d8vj" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69284943-ceac-4a72-9f41-ad6e22e1a962", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"", Pod:"coredns-76f75df574-9d8vj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6595643521", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:46.625217 containerd[2009]: 2024-07-02 08:59:46.573 [INFO][4864] k8s.go 387: Calico CNI using IPs: [192.168.12.130/32] ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Namespace="kube-system" Pod="coredns-76f75df574-9d8vj" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:46.625217 containerd[2009]: 2024-07-02 08:59:46.573 [INFO][4864] dataplane_linux.go 68: Setting the host side veth name to calia6595643521 ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Namespace="kube-system" Pod="coredns-76f75df574-9d8vj" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:46.625217 containerd[2009]: 2024-07-02 08:59:46.582 [INFO][4864] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Namespace="kube-system" Pod="coredns-76f75df574-9d8vj" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:46.625217 containerd[2009]: 2024-07-02 
08:59:46.584 [INFO][4864] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Namespace="kube-system" Pod="coredns-76f75df574-9d8vj" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69284943-ceac-4a72-9f41-ad6e22e1a962", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277", Pod:"coredns-76f75df574-9d8vj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6595643521", MAC:"52:00:f2:b0:51:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:46.625217 containerd[2009]: 2024-07-02 08:59:46.611 [INFO][4864] k8s.go 500: Wrote updated endpoint to datastore ContainerID="39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277" Namespace="kube-system" Pod="coredns-76f75df574-9d8vj" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:46.682421 containerd[2009]: time="2024-07-02T08:59:46.682014067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:46.682421 containerd[2009]: time="2024-07-02T08:59:46.682133479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:46.682993 containerd[2009]: time="2024-07-02T08:59:46.682217035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:46.682993 containerd[2009]: time="2024-07-02T08:59:46.682272055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:46.733097 systemd[1]: run-containerd-runc-k8s.io-39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277-runc.xmO3s1.mount: Deactivated successfully. 
Jul 2 08:59:46.748450 systemd[1]: Started cri-containerd-39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277.scope - libcontainer container 39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277. Jul 2 08:59:46.833255 containerd[2009]: time="2024-07-02T08:59:46.831007028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9d8vj,Uid:69284943-ceac-4a72-9f41-ad6e22e1a962,Namespace:kube-system,Attempt:1,} returns sandbox id \"39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277\"" Jul 2 08:59:46.839055 containerd[2009]: time="2024-07-02T08:59:46.838992932Z" level=info msg="CreateContainer within sandbox \"39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:59:46.870457 containerd[2009]: time="2024-07-02T08:59:46.870281072Z" level=info msg="CreateContainer within sandbox \"39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"129882e3abfde94397eb632565496999be8ef0824829d770fb66a8fadb5845d6\"" Jul 2 08:59:46.873570 containerd[2009]: time="2024-07-02T08:59:46.872798960Z" level=info msg="StartContainer for \"129882e3abfde94397eb632565496999be8ef0824829d770fb66a8fadb5845d6\"" Jul 2 08:59:46.927987 systemd[1]: Started cri-containerd-129882e3abfde94397eb632565496999be8ef0824829d770fb66a8fadb5845d6.scope - libcontainer container 129882e3abfde94397eb632565496999be8ef0824829d770fb66a8fadb5845d6. Jul 2 08:59:47.000666 containerd[2009]: time="2024-07-02T08:59:47.000592985Z" level=info msg="StartContainer for \"129882e3abfde94397eb632565496999be8ef0824829d770fb66a8fadb5845d6\" returns successfully" Jul 2 08:59:47.186436 containerd[2009]: time="2024-07-02T08:59:47.185825862Z" level=info msg="StopPodSandbox for \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\"" Jul 2 08:59:47.228647 containerd[2009]: time="2024-07-02T08:59:47.228530538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:47.231223 containerd[2009]: time="2024-07-02T08:59:47.231129114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 08:59:47.233305 containerd[2009]: time="2024-07-02T08:59:47.233213442Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:47.243539 containerd[2009]: time="2024-07-02T08:59:47.242747490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:47.245897 containerd[2009]: time="2024-07-02T08:59:47.245835414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.364581279s" Jul 2 08:59:47.246224 containerd[2009]: time="2024-07-02T08:59:47.246188394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 
08:59:47.255501 containerd[2009]: time="2024-07-02T08:59:47.255410334Z" level=info msg="CreateContainer within sandbox \"0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 08:59:47.297614 containerd[2009]: time="2024-07-02T08:59:47.297539419Z" level=info msg="CreateContainer within sandbox \"0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ea758d82dc5bd91deac7da7ecb659298006b13b359e37893991bd4f6d423672a\"" Jul 2 08:59:47.300155 containerd[2009]: time="2024-07-02T08:59:47.298477327Z" level=info msg="StartContainer for \"ea758d82dc5bd91deac7da7ecb659298006b13b359e37893991bd4f6d423672a\"" Jul 2 08:59:47.407879 systemd[1]: Started cri-containerd-ea758d82dc5bd91deac7da7ecb659298006b13b359e37893991bd4f6d423672a.scope - libcontainer container ea758d82dc5bd91deac7da7ecb659298006b13b359e37893991bd4f6d423672a. Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.311 [INFO][4994] k8s.go 608: Cleaning up netns ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.311 [INFO][4994] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" iface="eth0" netns="/var/run/netns/cni-6c01fabb-946d-eb32-11e8-f60c9633a474" Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.312 [INFO][4994] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" iface="eth0" netns="/var/run/netns/cni-6c01fabb-946d-eb32-11e8-f60c9633a474" Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.313 [INFO][4994] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" iface="eth0" netns="/var/run/netns/cni-6c01fabb-946d-eb32-11e8-f60c9633a474" Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.313 [INFO][4994] k8s.go 615: Releasing IP address(es) ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.314 [INFO][4994] utils.go 188: Calico CNI releasing IP address ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.390 [INFO][5005] ipam_plugin.go 411: Releasing address using handleID ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" HandleID="k8s-pod-network.67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.391 [INFO][5005] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.391 [INFO][5005] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.410 [WARNING][5005] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" HandleID="k8s-pod-network.67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.411 [INFO][5005] ipam_plugin.go 439: Releasing address using workloadID ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" HandleID="k8s-pod-network.67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.414 [INFO][5005] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:47.420289 containerd[2009]: 2024-07-02 08:59:47.417 [INFO][4994] k8s.go 621: Teardown processing complete. ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:47.426885 containerd[2009]: time="2024-07-02T08:59:47.423636907Z" level=info msg="TearDown network for sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\" successfully" Jul 2 08:59:47.426885 containerd[2009]: time="2024-07-02T08:59:47.423692779Z" level=info msg="StopPodSandbox for \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\" returns successfully" Jul 2 08:59:47.426150 systemd[1]: run-netns-cni\x2d6c01fabb\x2d946d\x2deb32\x2d11e8\x2df60c9633a474.mount: Deactivated successfully. Jul 2 08:59:47.427237 containerd[2009]: time="2024-07-02T08:59:47.427125883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-prdlg,Uid:04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9,Namespace:kube-system,Attempt:1,}" Jul 2 08:59:47.510363 containerd[2009]: time="2024-07-02T08:59:47.506848328Z" level=info msg="StartContainer for \"ea758d82dc5bd91deac7da7ecb659298006b13b359e37893991bd4f6d423672a\" returns successfully" Jul 2 08:59:47.510363 containerd[2009]: time="2024-07-02T08:59:47.509857280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 08:59:47.599668 systemd-networkd[1850]: cali9f898830a19: Gained IPv6LL Jul 2 08:59:47.605316 kubelet[3252]: I0702 08:59:47.601071 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9d8vj" podStartSLOduration=36.601014404 podStartE2EDuration="36.601014404s" podCreationTimestamp="2024-07-02 08:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:59:47.60065534 +0000 UTC m=+49.655999312" watchObservedRunningTime="2024-07-02 08:59:47.601014404 +0000 UTC m=+49.656358388" Jul 2 08:59:47.767560 systemd-networkd[1850]: calibc6ab919ad1: Link UP Jul 2 08:59:47.770187 systemd-networkd[1850]: calibc6ab919ad1: Gained carrier Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.558 [INFO][5037] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0 coredns-76f75df574- kube-system 04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9 796 0 2024-07-02 08:59:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-27 coredns-76f75df574-prdlg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibc6ab919ad1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } 
{metrics TCP 9153 0 }] []}} ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Namespace="kube-system" Pod="coredns-76f75df574-prdlg" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.559 [INFO][5037] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Namespace="kube-system" Pod="coredns-76f75df574-prdlg" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.663 [INFO][5055] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" HandleID="k8s-pod-network.64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.696 [INFO][5055] ipam_plugin.go 264: Auto assigning IP ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" HandleID="k8s-pod-network.64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000420000), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-27", "pod":"coredns-76f75df574-prdlg", "timestamp":"2024-07-02 08:59:47.66263264 +0000 UTC"}, Hostname:"ip-172-31-30-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.696 [INFO][5055] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.697 [INFO][5055] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.697 [INFO][5055] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-27' Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.702 [INFO][5055] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" host="ip-172-31-30-27" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.718 [INFO][5055] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-27" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.729 [INFO][5055] ipam.go 489: Trying affinity for 192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.732 [INFO][5055] ipam.go 155: Attempting to load block cidr=192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.736 [INFO][5055] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.736 [INFO][5055] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" host="ip-172-31-30-27" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.740 [INFO][5055] ipam.go 1685: Creating new handle: k8s-pod-network.64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.749 [INFO][5055] ipam.go 1203: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" host="ip-172-31-30-27" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.757 [INFO][5055] ipam.go 1216: Successfully claimed IPs: [192.168.12.131/26] block=192.168.12.128/26 handle="k8s-pod-network.64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" host="ip-172-31-30-27" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.757 [INFO][5055] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.131/26] handle="k8s-pod-network.64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" host="ip-172-31-30-27" Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.757 [INFO][5055] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:59:47.795036 containerd[2009]: 2024-07-02 08:59:47.757 [INFO][5055] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.12.131/26] IPv6=[] ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" HandleID="k8s-pod-network.64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:47.798146 containerd[2009]: 2024-07-02 08:59:47.761 [INFO][5037] k8s.go 386: Populated endpoint ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Namespace="kube-system" Pod="coredns-76f75df574-prdlg" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"", Pod:"coredns-76f75df574-prdlg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc6ab919ad1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:47.798146 containerd[2009]: 2024-07-02 08:59:47.761 [INFO][5037] k8s.go 387: Calico CNI using IPs: [192.168.12.131/32] ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Namespace="kube-system" Pod="coredns-76f75df574-prdlg" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:47.798146 containerd[2009]: 2024-07-02 08:59:47.761 [INFO][5037] dataplane_linux.go 68: Setting the host side veth name to calibc6ab919ad1 ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Namespace="kube-system" Pod="coredns-76f75df574-prdlg" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:47.798146 containerd[2009]: 2024-07-02 08:59:47.770 [INFO][5037] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Namespace="kube-system" Pod="coredns-76f75df574-prdlg" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:47.798146 containerd[2009]: 2024-07-02 
08:59:47.771 [INFO][5037] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Namespace="kube-system" Pod="coredns-76f75df574-prdlg" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b", Pod:"coredns-76f75df574-prdlg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc6ab919ad1", MAC:"22:8d:19:5b:9d:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:47.798146 containerd[2009]: 2024-07-02 08:59:47.787 [INFO][5037] k8s.go 500: Wrote updated endpoint to datastore ContainerID="64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b" Namespace="kube-system" Pod="coredns-76f75df574-prdlg" WorkloadEndpoint="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:47.853817 systemd-networkd[1850]: calia6595643521: Gained IPv6LL Jul 2 08:59:47.860958 containerd[2009]: time="2024-07-02T08:59:47.860777889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:47.861704 containerd[2009]: time="2024-07-02T08:59:47.861628773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:47.861976 containerd[2009]: time="2024-07-02T08:59:47.861862857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:47.862219 containerd[2009]: time="2024-07-02T08:59:47.861953109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:47.909807 systemd[1]: Started cri-containerd-64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b.scope - libcontainer container 64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b. Jul 2 08:59:47.981083 containerd[2009]: time="2024-07-02T08:59:47.981007390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-prdlg,Uid:04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9,Namespace:kube-system,Attempt:1,} returns sandbox id \"64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b\"" Jul 2 08:59:47.989071 containerd[2009]: time="2024-07-02T08:59:47.988634110Z" level=info msg="CreateContainer within sandbox \"64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:59:48.013629 containerd[2009]: time="2024-07-02T08:59:48.013466274Z" level=info msg="CreateContainer within sandbox \"64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"525200ee2133694f8815fed7d73f722c5b8700e0d841a955d93febab90fc04cf\"" Jul 2 08:59:48.015147 containerd[2009]: time="2024-07-02T08:59:48.015016830Z" level=info msg="StartContainer for \"525200ee2133694f8815fed7d73f722c5b8700e0d841a955d93febab90fc04cf\"" Jul 2 08:59:48.065780 systemd[1]: Started cri-containerd-525200ee2133694f8815fed7d73f722c5b8700e0d841a955d93febab90fc04cf.scope - libcontainer container 525200ee2133694f8815fed7d73f722c5b8700e0d841a955d93febab90fc04cf. Jul 2 08:59:48.117343 containerd[2009]: time="2024-07-02T08:59:48.117264271Z" level=info msg="StartContainer for \"525200ee2133694f8815fed7d73f722c5b8700e0d841a955d93febab90fc04cf\" returns successfully" Jul 2 08:59:48.188707 containerd[2009]: time="2024-07-02T08:59:48.188466079Z" level=info msg="StopPodSandbox for \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\"" Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.306 [INFO][5169] k8s.go 608: Cleaning up netns ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.307 [INFO][5169] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" iface="eth0" netns="/var/run/netns/cni-ccbc5d9c-f3e0-93ee-fe5d-7d8c57a573c5" Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.307 [INFO][5169] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" iface="eth0" netns="/var/run/netns/cni-ccbc5d9c-f3e0-93ee-fe5d-7d8c57a573c5" Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.308 [INFO][5169] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" iface="eth0" netns="/var/run/netns/cni-ccbc5d9c-f3e0-93ee-fe5d-7d8c57a573c5" Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.308 [INFO][5169] k8s.go 615: Releasing IP address(es) ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.309 [INFO][5169] utils.go 188: Calico CNI releasing IP address ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.354 [INFO][5175] ipam_plugin.go 411: Releasing address using handleID ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" HandleID="k8s-pod-network.82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.354 [INFO][5175] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.355 [INFO][5175] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.372 [WARNING][5175] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" HandleID="k8s-pod-network.82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.372 [INFO][5175] ipam_plugin.go 439: Releasing address using workloadID ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" HandleID="k8s-pod-network.82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.377 [INFO][5175] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:48.391459 containerd[2009]: 2024-07-02 08:59:48.384 [INFO][5169] k8s.go 621: Teardown processing complete. ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:48.395205 containerd[2009]: time="2024-07-02T08:59:48.393205160Z" level=info msg="TearDown network for sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\" successfully" Jul 2 08:59:48.395346 containerd[2009]: time="2024-07-02T08:59:48.395216384Z" level=info msg="StopPodSandbox for \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\" returns successfully" Jul 2 08:59:48.398713 systemd[1]: run-netns-cni\x2dccbc5d9c\x2df3e0\x2d93ee\x2dfe5d\x2d7d8c57a573c5.mount: Deactivated successfully. 
Jul 2 08:59:48.399200 containerd[2009]: time="2024-07-02T08:59:48.399112700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-599895b6cb-pdhbb,Uid:eaf1a623-0ff8-41cc-be6c-c8208e0fe27d,Namespace:calico-system,Attempt:1,}" Jul 2 08:59:48.689473 kubelet[3252]: I0702 08:59:48.689286 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-prdlg" podStartSLOduration=37.689223765 podStartE2EDuration="37.689223765s" podCreationTimestamp="2024-07-02 08:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:59:48.646598817 +0000 UTC m=+50.701942825" watchObservedRunningTime="2024-07-02 08:59:48.689223765 +0000 UTC m=+50.744567737" Jul 2 08:59:48.765108 systemd-networkd[1850]: calidcbf3e74a6c: Link UP Jul 2 08:59:48.767672 systemd-networkd[1850]: calidcbf3e74a6c: Gained carrier Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.518 [INFO][5181] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0 calico-kube-controllers-599895b6cb- calico-system eaf1a623-0ff8-41cc-be6c-c8208e0fe27d 823 0 2024-07-02 08:59:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:599895b6cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-27 calico-kube-controllers-599895b6cb-pdhbb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidcbf3e74a6c [] []}} ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Namespace="calico-system" Pod="calico-kube-controllers-599895b6cb-pdhbb" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.519 [INFO][5181] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Namespace="calico-system" Pod="calico-kube-controllers-599895b6cb-pdhbb" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.577 [INFO][5193] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" HandleID="k8s-pod-network.a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.629 [INFO][5193] ipam_plugin.go 264: Auto assigning IP ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" HandleID="k8s-pod-network.a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316250), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-27", "pod":"calico-kube-controllers-599895b6cb-pdhbb", "timestamp":"2024-07-02 08:59:48.577741485 +0000 UTC"}, Hostname:"ip-172-31-30-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.629 [INFO][5193] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.629 [INFO][5193] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.629 [INFO][5193] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-27' Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.634 [INFO][5193] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" host="ip-172-31-30-27" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.656 [INFO][5193] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-27" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.693 [INFO][5193] ipam.go 489: Trying affinity for 192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.704 [INFO][5193] ipam.go 155: Attempting to load block cidr=192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.718 [INFO][5193] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ip-172-31-30-27" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.718 [INFO][5193] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" host="ip-172-31-30-27" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.723 [INFO][5193] ipam.go 1685: Creating new handle: k8s-pod-network.a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290 Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.736 [INFO][5193] ipam.go 1203: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" host="ip-172-31-30-27" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.747 [INFO][5193] ipam.go 1216: Successfully claimed IPs: [192.168.12.132/26] block=192.168.12.128/26 handle="k8s-pod-network.a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" host="ip-172-31-30-27" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.748 [INFO][5193] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.132/26] handle="k8s-pod-network.a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" host="ip-172-31-30-27" Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.748 [INFO][5193] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:59:48.817257 containerd[2009]: 2024-07-02 08:59:48.748 [INFO][5193] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.12.132/26] IPv6=[] ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" HandleID="k8s-pod-network.a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:48.819267 containerd[2009]: 2024-07-02 08:59:48.752 [INFO][5181] k8s.go 386: Populated endpoint ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Namespace="calico-system" Pod="calico-kube-controllers-599895b6cb-pdhbb" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0", GenerateName:"calico-kube-controllers-599895b6cb-", Namespace:"calico-system", SelfLink:"", UID:"eaf1a623-0ff8-41cc-be6c-c8208e0fe27d", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"599895b6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"", Pod:"calico-kube-controllers-599895b6cb-pdhbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidcbf3e74a6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:48.819267 containerd[2009]: 2024-07-02 08:59:48.752 [INFO][5181] k8s.go 387: Calico CNI using IPs: [192.168.12.132/32] ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Namespace="calico-system" Pod="calico-kube-controllers-599895b6cb-pdhbb" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:48.819267 containerd[2009]: 2024-07-02 08:59:48.753 [INFO][5181] dataplane_linux.go 68: Setting the host side veth name to calidcbf3e74a6c ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Namespace="calico-system" Pod="calico-kube-controllers-599895b6cb-pdhbb" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:48.819267 containerd[2009]: 2024-07-02 08:59:48.767 [INFO][5181] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Namespace="calico-system" Pod="calico-kube-controllers-599895b6cb-pdhbb" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:48.819267 containerd[2009]: 2024-07-02 08:59:48.768 [INFO][5181] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Namespace="calico-system" Pod="calico-kube-controllers-599895b6cb-pdhbb" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0", GenerateName:"calico-kube-controllers-599895b6cb-", Namespace:"calico-system", SelfLink:"", UID:"eaf1a623-0ff8-41cc-be6c-c8208e0fe27d", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"599895b6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290", Pod:"calico-kube-controllers-599895b6cb-pdhbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidcbf3e74a6c", MAC:"06:01:f4:ac:32:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:48.819267 containerd[2009]: 2024-07-02 08:59:48.811 [INFO][5181] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290" Namespace="calico-system" Pod="calico-kube-controllers-599895b6cb-pdhbb" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:48.894543 containerd[2009]: time="2024-07-02T08:59:48.890561326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:48.894543 containerd[2009]: time="2024-07-02T08:59:48.891571726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:48.894543 containerd[2009]: time="2024-07-02T08:59:48.891620506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:48.894543 containerd[2009]: time="2024-07-02T08:59:48.891664378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:48.941983 systemd-networkd[1850]: calibc6ab919ad1: Gained IPv6LL Jul 2 08:59:48.948051 systemd[1]: Started sshd@9-172.31.30.27:22-147.75.109.163:41656.service - OpenSSH per-connection server daemon (147.75.109.163:41656). Jul 2 08:59:48.981819 systemd[1]: Started cri-containerd-a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290.scope - libcontainer container a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290. 
Jul 2 08:59:49.096878 containerd[2009]: time="2024-07-02T08:59:49.096812467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-599895b6cb-pdhbb,Uid:eaf1a623-0ff8-41cc-be6c-c8208e0fe27d,Namespace:calico-system,Attempt:1,} returns sandbox id \"a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290\"" Jul 2 08:59:49.167117 sshd[5243]: Accepted publickey for core from 147.75.109.163 port 41656 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:49.171038 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:49.182219 systemd-logind[1993]: New session 10 of user core. Jul 2 08:59:49.188778 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 08:59:49.445929 sshd[5243]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:49.451158 systemd[1]: sshd@9-172.31.30.27:22-147.75.109.163:41656.service: Deactivated successfully. Jul 2 08:59:49.456181 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:59:49.459860 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:59:49.462427 systemd-logind[1993]: Removed session 10. Jul 2 08:59:49.484970 systemd[1]: Started sshd@10-172.31.30.27:22-147.75.109.163:41660.service - OpenSSH per-connection server daemon (147.75.109.163:41660). Jul 2 08:59:49.671877 sshd[5275]: Accepted publickey for core from 147.75.109.163 port 41660 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:49.674449 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:49.683238 systemd-logind[1993]: New session 11 of user core. Jul 2 08:59:49.688772 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 08:59:50.032369 sshd[5275]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:50.042814 systemd[1]: sshd@10-172.31.30.27:22-147.75.109.163:41660.service: Deactivated successfully. Jul 2 08:59:50.051224 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:59:50.062135 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit. Jul 2 08:59:50.096008 systemd[1]: Started sshd@11-172.31.30.27:22-147.75.109.163:41676.service - OpenSSH per-connection server daemon (147.75.109.163:41676). Jul 2 08:59:50.099802 systemd-logind[1993]: Removed session 11. Jul 2 08:59:50.321662 sshd[5289]: Accepted publickey for core from 147.75.109.163 port 41676 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:50.326077 sshd[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:50.343718 systemd-logind[1993]: New session 12 of user core. Jul 2 08:59:50.351344 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 2 08:59:50.536296 containerd[2009]: time="2024-07-02T08:59:50.532604927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:50.540839 containerd[2009]: time="2024-07-02T08:59:50.540772295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 08:59:50.543107 containerd[2009]: time="2024-07-02T08:59:50.543042263Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:50.558723 containerd[2009]: time="2024-07-02T08:59:50.558652979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:50.563846 containerd[2009]: time="2024-07-02T08:59:50.563782751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 3.053866311s" Jul 2 08:59:50.564109 containerd[2009]: time="2024-07-02T08:59:50.564025199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 08:59:50.568407 containerd[2009]: time="2024-07-02T08:59:50.568298231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 08:59:50.572375 containerd[2009]: time="2024-07-02T08:59:50.571802555Z" level=info msg="CreateContainer within sandbox \"0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 08:59:50.653684 containerd[2009]: time="2024-07-02T08:59:50.653602187Z" level=info msg="CreateContainer within sandbox \"0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"64b96e3b742d4f41823ad907f1db75e8833cca08120edf9a2b0ab6adae7e06d5\"" Jul 2 08:59:50.662778 containerd[2009]: time="2024-07-02T08:59:50.655719791Z" level=info msg="StartContainer for \"64b96e3b742d4f41823ad907f1db75e8833cca08120edf9a2b0ab6adae7e06d5\"" Jul 2 08:59:50.663055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415760263.mount: Deactivated successfully. Jul 2 08:59:50.733776 systemd-networkd[1850]: calidcbf3e74a6c: Gained IPv6LL Jul 2 08:59:50.789379 systemd[1]: run-containerd-runc-k8s.io-64b96e3b742d4f41823ad907f1db75e8833cca08120edf9a2b0ab6adae7e06d5-runc.sTsuVy.mount: Deactivated successfully. Jul 2 08:59:50.803934 systemd[1]: Started cri-containerd-64b96e3b742d4f41823ad907f1db75e8833cca08120edf9a2b0ab6adae7e06d5.scope - libcontainer container 64b96e3b742d4f41823ad907f1db75e8833cca08120edf9a2b0ab6adae7e06d5. Jul 2 08:59:50.809124 sshd[5289]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:50.821298 systemd[1]: sshd@11-172.31.30.27:22-147.75.109.163:41676.service: Deactivated successfully. 
Jul 2 08:59:50.831728 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:59:50.838499 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:59:50.842763 systemd-logind[1993]: Removed session 12. Jul 2 08:59:50.926330 containerd[2009]: time="2024-07-02T08:59:50.926174377Z" level=info msg="StartContainer for \"64b96e3b742d4f41823ad907f1db75e8833cca08120edf9a2b0ab6adae7e06d5\" returns successfully" Jul 2 08:59:51.408296 kubelet[3252]: I0702 08:59:51.408234 3252 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 08:59:51.408296 kubelet[3252]: I0702 08:59:51.408300 3252 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 08:59:52.939453 ntpd[1987]: Listen normally on 7 vxlan.calico 192.168.12.128:123 Jul 2 08:59:52.940752 ntpd[1987]: 2 Jul 08:59:52 ntpd[1987]: Listen normally on 7 vxlan.calico 192.168.12.128:123 Jul 2 08:59:52.940752 ntpd[1987]: 2 Jul 08:59:52 ntpd[1987]: Listen normally on 8 vxlan.calico [fe80::645f:47ff:fecb:9de9%4]:123 Jul 2 08:59:52.940752 ntpd[1987]: 2 Jul 08:59:52 ntpd[1987]: Listen normally on 9 cali9f898830a19 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 08:59:52.940752 ntpd[1987]: 2 Jul 08:59:52 ntpd[1987]: Listen normally on 10 calia6595643521 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 08:59:52.940752 ntpd[1987]: 2 Jul 08:59:52 ntpd[1987]: Listen normally on 11 calibc6ab919ad1 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 08:59:52.940752 ntpd[1987]: 2 Jul 08:59:52 ntpd[1987]: Listen normally on 12 calidcbf3e74a6c [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 08:59:52.939614 ntpd[1987]: Listen normally on 8 vxlan.calico [fe80::645f:47ff:fecb:9de9%4]:123 Jul 2 08:59:52.939698 ntpd[1987]: Listen normally on 9 cali9f898830a19 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 08:59:52.939767 ntpd[1987]: Listen normally on 10 calia6595643521 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 08:59:52.939835 ntpd[1987]: Listen normally on 11 calibc6ab919ad1 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 08:59:52.939907 ntpd[1987]: Listen normally on 12 calidcbf3e74a6c [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 08:59:54.990775 containerd[2009]: time="2024-07-02T08:59:54.990573989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:54.995313 containerd[2009]: time="2024-07-02T08:59:54.994548545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 08:59:54.997284 containerd[2009]: time="2024-07-02T08:59:54.997195301Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:55.008760 containerd[2009]: time="2024-07-02T08:59:55.008684701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:55.011564 containerd[2009]: time="2024-07-02T08:59:55.011269441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 4.439329426s" Jul 2 08:59:55.011564 containerd[2009]: time="2024-07-02T08:59:55.011363869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 08:59:55.056059 containerd[2009]: time="2024-07-02T08:59:55.055997029Z" level=info msg="CreateContainer within sandbox \"a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 08:59:55.083577 containerd[2009]: time="2024-07-02T08:59:55.083343877Z" level=info msg="CreateContainer within sandbox \"a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"00b56692b85bf70d14e83a9c57b5efab5f4f8421ad9f167bfbd85f102bb066b1\"" Jul 2 08:59:55.085867 containerd[2009]: time="2024-07-02T08:59:55.085729957Z" level=info msg="StartContainer for \"00b56692b85bf70d14e83a9c57b5efab5f4f8421ad9f167bfbd85f102bb066b1\"" Jul 2 08:59:55.160867 systemd[1]: Started cri-containerd-00b56692b85bf70d14e83a9c57b5efab5f4f8421ad9f167bfbd85f102bb066b1.scope - libcontainer container 00b56692b85bf70d14e83a9c57b5efab5f4f8421ad9f167bfbd85f102bb066b1. Jul 2 08:59:55.272576 containerd[2009]: time="2024-07-02T08:59:55.272403266Z" level=info msg="StartContainer for \"00b56692b85bf70d14e83a9c57b5efab5f4f8421ad9f167bfbd85f102bb066b1\" returns successfully" Jul 2 08:59:55.720205 kubelet[3252]: I0702 08:59:55.719257 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-58jnw" podStartSLOduration=32.033233492 podStartE2EDuration="36.7191856s" podCreationTimestamp="2024-07-02 08:59:19 +0000 UTC" firstStartedPulling="2024-07-02 08:59:45.879875635 +0000 UTC m=+47.935219607" lastFinishedPulling="2024-07-02 08:59:50.565827755 +0000 UTC m=+52.621171715" observedRunningTime="2024-07-02 08:59:51.686107404 +0000 UTC m=+53.741451424" watchObservedRunningTime="2024-07-02 08:59:55.7191856 +0000 UTC m=+57.774529572" Jul 2 08:59:55.816542 kubelet[3252]: I0702 08:59:55.816438 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-599895b6cb-pdhbb" podStartSLOduration=29.903962147 podStartE2EDuration="35.816376481s" podCreationTimestamp="2024-07-02 08:59:20 +0000 UTC" firstStartedPulling="2024-07-02 08:59:49.099456115 +0000 UTC m=+51.154800087" lastFinishedPulling="2024-07-02 08:59:55.011870437 +0000 UTC m=+57.067214421" observedRunningTime="2024-07-02 08:59:55.72102412 +0000 UTC m=+57.776368128" watchObservedRunningTime="2024-07-02 08:59:55.816376481 +0000 UTC m=+57.871720453" Jul 2 08:59:55.852220 systemd[1]: Started sshd@12-172.31.30.27:22-147.75.109.163:37190.service - OpenSSH per-connection server daemon (147.75.109.163:37190). Jul 2 08:59:56.050863 sshd[5423]: Accepted publickey for core from 147.75.109.163 port 37190 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:59:56.053785 sshd[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:59:56.069611 systemd-logind[1993]: New session 13 of user core. Jul 2 08:59:56.078819 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 2 08:59:56.391801 sshd[5423]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:56.402161 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 08:59:56.406289 systemd[1]: sshd@12-172.31.30.27:22-147.75.109.163:37190.service: Deactivated successfully. Jul 2 08:59:56.414397 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit. Jul 2 08:59:56.418748 systemd-logind[1993]: Removed session 13. Jul 2 08:59:58.209175 containerd[2009]: time="2024-07-02T08:59:58.209113433Z" level=info msg="StopPodSandbox for \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\"" Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.283 [WARNING][5448] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69284943-ceac-4a72-9f41-ad6e22e1a962", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277", Pod:"coredns-76f75df574-9d8vj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6595643521", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.283 [INFO][5448] k8s.go 608: Cleaning up netns ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.283 [INFO][5448] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" iface="eth0" netns="" Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.284 [INFO][5448] k8s.go 615: Releasing IP address(es) ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.284 [INFO][5448] utils.go 188: Calico CNI releasing IP address ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.327 [INFO][5456] ipam_plugin.go 411: Releasing address using handleID ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" HandleID="k8s-pod-network.77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.327 [INFO][5456] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.328 [INFO][5456] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.340 [WARNING][5456] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" HandleID="k8s-pod-network.77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.340 [INFO][5456] ipam_plugin.go 439: Releasing address using workloadID ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" HandleID="k8s-pod-network.77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.343 [INFO][5456] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:58.350776 containerd[2009]: 2024-07-02 08:59:58.346 [INFO][5448] k8s.go 621: Teardown processing complete. ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:58.350776 containerd[2009]: time="2024-07-02T08:59:58.349377845Z" level=info msg="TearDown network for sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\" successfully" Jul 2 08:59:58.350776 containerd[2009]: time="2024-07-02T08:59:58.349424069Z" level=info msg="StopPodSandbox for \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\" returns successfully" Jul 2 08:59:58.350776 containerd[2009]: time="2024-07-02T08:59:58.350338013Z" level=info msg="RemovePodSandbox for \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\"" Jul 2 08:59:58.350776 containerd[2009]: time="2024-07-02T08:59:58.350392037Z" level=info msg="Forcibly stopping sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\"" Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.431 [WARNING][5474] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"69284943-ceac-4a72-9f41-ad6e22e1a962", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"39ef884c9c63ca52920342da5146df9b3db77ff9c431b598cefa4c75698bd277", Pod:"coredns-76f75df574-9d8vj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6595643521", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.432 [INFO][5474] k8s.go 608: Cleaning up netns ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.432 [INFO][5474] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" iface="eth0" netns="" Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.433 [INFO][5474] k8s.go 615: Releasing IP address(es) ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.433 [INFO][5474] utils.go 188: Calico CNI releasing IP address ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.476 [INFO][5480] ipam_plugin.go 411: Releasing address using handleID ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" HandleID="k8s-pod-network.77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.476 [INFO][5480] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.476 [INFO][5480] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.488 [WARNING][5480] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" HandleID="k8s-pod-network.77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.488 [INFO][5480] ipam_plugin.go 439: Releasing address using workloadID ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" HandleID="k8s-pod-network.77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--9d8vj-eth0" Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.491 [INFO][5480] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:58.497071 containerd[2009]: 2024-07-02 08:59:58.493 [INFO][5474] k8s.go 621: Teardown processing complete. ContainerID="77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e" Jul 2 08:59:58.497071 containerd[2009]: time="2024-07-02T08:59:58.497007954Z" level=info msg="TearDown network for sandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\" successfully" Jul 2 08:59:58.505692 containerd[2009]: time="2024-07-02T08:59:58.505596666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:59:58.505889 containerd[2009]: time="2024-07-02T08:59:58.505711434Z" level=info msg="RemovePodSandbox \"77fef0a7a2c336d2cb32adfcdf3703645106d19194e85156b9245b8e70c3cb0e\" returns successfully" Jul 2 08:59:58.506709 containerd[2009]: time="2024-07-02T08:59:58.506632110Z" level=info msg="StopPodSandbox for \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\"" Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.571 [WARNING][5498] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b", Pod:"coredns-76f75df574-prdlg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc6ab919ad1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.572 [INFO][5498] k8s.go 608: Cleaning up netns ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.572 [INFO][5498] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" iface="eth0" netns="" Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.572 [INFO][5498] k8s.go 615: Releasing IP address(es) ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.572 [INFO][5498] utils.go 188: Calico CNI releasing IP address ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.611 [INFO][5504] ipam_plugin.go 411: Releasing address using handleID ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" HandleID="k8s-pod-network.67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.611 [INFO][5504] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.611 [INFO][5504] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.625 [WARNING][5504] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" HandleID="k8s-pod-network.67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.625 [INFO][5504] ipam_plugin.go 439: Releasing address using workloadID ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" HandleID="k8s-pod-network.67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.628 [INFO][5504] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:58.634015 containerd[2009]: 2024-07-02 08:59:58.631 [INFO][5498] k8s.go 621: Teardown processing complete. ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:58.635176 containerd[2009]: time="2024-07-02T08:59:58.634068427Z" level=info msg="TearDown network for sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\" successfully" Jul 2 08:59:58.635176 containerd[2009]: time="2024-07-02T08:59:58.634107847Z" level=info msg="StopPodSandbox for \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\" returns successfully" Jul 2 08:59:58.636009 containerd[2009]: time="2024-07-02T08:59:58.635914711Z" level=info msg="RemovePodSandbox for \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\"" Jul 2 08:59:58.636125 containerd[2009]: time="2024-07-02T08:59:58.636012115Z" level=info msg="Forcibly stopping sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\"" Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.705 [WARNING][5523] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"04168a1b-6f74-4de0-b54f-d7ae7ddfc2f9", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"64180b0dbf40df71a22fcf3a3694fee4d3e5dfc96a1237d6306da914c465895b", Pod:"coredns-76f75df574-prdlg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc6ab919ad1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.705 [INFO][5523] k8s.go 608: Cleaning up netns ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.705 [INFO][5523] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" iface="eth0" netns="" Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.705 [INFO][5523] k8s.go 615: Releasing IP address(es) ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.705 [INFO][5523] utils.go 188: Calico CNI releasing IP address ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.764 [INFO][5530] ipam_plugin.go 411: Releasing address using handleID ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" HandleID="k8s-pod-network.67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.765 [INFO][5530] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.765 [INFO][5530] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.782 [WARNING][5530] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" HandleID="k8s-pod-network.67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.782 [INFO][5530] ipam_plugin.go 439: Releasing address using workloadID ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" HandleID="k8s-pod-network.67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Workload="ip--172--31--30--27-k8s-coredns--76f75df574--prdlg-eth0" Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.786 [INFO][5530] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:58.796204 containerd[2009]: 2024-07-02 08:59:58.792 [INFO][5523] k8s.go 621: Teardown processing complete. ContainerID="67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c" Jul 2 08:59:58.796204 containerd[2009]: time="2024-07-02T08:59:58.795683144Z" level=info msg="TearDown network for sandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\" successfully" Jul 2 08:59:58.802311 containerd[2009]: time="2024-07-02T08:59:58.802254152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:59:58.802815 containerd[2009]: time="2024-07-02T08:59:58.802595912Z" level=info msg="RemovePodSandbox \"67df26cd663e673a8ab6c87345fa3230edb7eb133304cd0c35dcbfdabc57bc7c\" returns successfully" Jul 2 08:59:58.803637 containerd[2009]: time="2024-07-02T08:59:58.803546828Z" level=info msg="StopPodSandbox for \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\"" Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.877 [WARNING][5548] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"854c111d-7e31-40e1-a3bc-810de7814240", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca", Pod:"csi-node-driver-58jnw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9f898830a19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.878 [INFO][5548] k8s.go 608: Cleaning up netns ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.878 [INFO][5548] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" iface="eth0" netns="" Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.878 [INFO][5548] k8s.go 615: Releasing IP address(es) ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.878 [INFO][5548] utils.go 188: Calico CNI releasing IP address ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.916 [INFO][5554] ipam_plugin.go 411: Releasing address using handleID ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" HandleID="k8s-pod-network.0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.916 [INFO][5554] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.917 [INFO][5554] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.929 [WARNING][5554] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" HandleID="k8s-pod-network.0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.929 [INFO][5554] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" HandleID="k8s-pod-network.0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.932 [INFO][5554] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:58.937622 containerd[2009]: 2024-07-02 08:59:58.935 [INFO][5548] k8s.go 621: Teardown processing complete. ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:58.939571 containerd[2009]: time="2024-07-02T08:59:58.937894496Z" level=info msg="TearDown network for sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\" successfully" Jul 2 08:59:58.939571 containerd[2009]: time="2024-07-02T08:59:58.937936328Z" level=info msg="StopPodSandbox for \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\" returns successfully" Jul 2 08:59:58.940237 containerd[2009]: time="2024-07-02T08:59:58.939756056Z" level=info msg="RemovePodSandbox for \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\"" Jul 2 08:59:58.940237 containerd[2009]: time="2024-07-02T08:59:58.939818180Z" level=info msg="Forcibly stopping sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\"" Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.007 [WARNING][5573] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"854c111d-7e31-40e1-a3bc-810de7814240", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"0dca9d20017af0bba7178948b012012296e8ba1aca5a07789c698b7ae0a87aca", Pod:"csi-node-driver-58jnw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9f898830a19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.007 [INFO][5573] k8s.go 608: Cleaning up netns ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.007 [INFO][5573] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" iface="eth0" netns="" Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.008 [INFO][5573] k8s.go 615: Releasing IP address(es) ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.008 [INFO][5573] utils.go 188: Calico CNI releasing IP address ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.045 [INFO][5579] ipam_plugin.go 411: Releasing address using handleID ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" HandleID="k8s-pod-network.0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.046 [INFO][5579] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.046 [INFO][5579] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.065 [WARNING][5579] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" HandleID="k8s-pod-network.0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.065 [INFO][5579] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" HandleID="k8s-pod-network.0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Workload="ip--172--31--30--27-k8s-csi--node--driver--58jnw-eth0" Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.068 [INFO][5579] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:59.079213 containerd[2009]: 2024-07-02 08:59:59.074 [INFO][5573] k8s.go 621: Teardown processing complete. ContainerID="0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd" Jul 2 08:59:59.080604 containerd[2009]: time="2024-07-02T08:59:59.079164257Z" level=info msg="TearDown network for sandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\" successfully" Jul 2 08:59:59.087821 containerd[2009]: time="2024-07-02T08:59:59.087723857Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:59:59.088276 containerd[2009]: time="2024-07-02T08:59:59.087842933Z" level=info msg="RemovePodSandbox \"0cb6ce772943c3556d27c339a42e2677b5d4ceba4f05fbf3d66f45bc1af7c9bd\" returns successfully" Jul 2 08:59:59.088966 containerd[2009]: time="2024-07-02T08:59:59.088834517Z" level=info msg="StopPodSandbox for \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\"" Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.163 [WARNING][5599] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0", GenerateName:"calico-kube-controllers-599895b6cb-", Namespace:"calico-system", SelfLink:"", UID:"eaf1a623-0ff8-41cc-be6c-c8208e0fe27d", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"599895b6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290", Pod:"calico-kube-controllers-599895b6cb-pdhbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidcbf3e74a6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.163 [INFO][5599] k8s.go 608: Cleaning up netns ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.164 [INFO][5599] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" iface="eth0" netns="" Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.164 [INFO][5599] k8s.go 615: Releasing IP address(es) ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.164 [INFO][5599] utils.go 188: Calico CNI releasing IP address ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.204 [INFO][5605] ipam_plugin.go 411: Releasing address using handleID ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" HandleID="k8s-pod-network.82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.205 [INFO][5605] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.205 [INFO][5605] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.223 [WARNING][5605] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" HandleID="k8s-pod-network.82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.223 [INFO][5605] ipam_plugin.go 439: Releasing address using workloadID ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" HandleID="k8s-pod-network.82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.225 [INFO][5605] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:59.231942 containerd[2009]: 2024-07-02 08:59:59.228 [INFO][5599] k8s.go 621: Teardown processing complete. ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:59.233445 containerd[2009]: time="2024-07-02T08:59:59.231982278Z" level=info msg="TearDown network for sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\" successfully" Jul 2 08:59:59.233445 containerd[2009]: time="2024-07-02T08:59:59.232024506Z" level=info msg="StopPodSandbox for \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\" returns successfully" Jul 2 08:59:59.233445 containerd[2009]: time="2024-07-02T08:59:59.233276190Z" level=info msg="RemovePodSandbox for \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\"" Jul 2 08:59:59.233445 containerd[2009]: time="2024-07-02T08:59:59.233328882Z" level=info msg="Forcibly stopping sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\"" Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.305 [WARNING][5624] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0", GenerateName:"calico-kube-controllers-599895b6cb-", Namespace:"calico-system", SelfLink:"", UID:"eaf1a623-0ff8-41cc-be6c-c8208e0fe27d", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"599895b6cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"a6d32283363aa9e00c70280d96112b48284455e54de35b17c36b4676ec021290", Pod:"calico-kube-controllers-599895b6cb-pdhbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidcbf3e74a6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.306 [INFO][5624] k8s.go 608: Cleaning up netns ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.306 [INFO][5624] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" iface="eth0" netns="" Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.306 [INFO][5624] k8s.go 615: Releasing IP address(es) ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.306 [INFO][5624] utils.go 188: Calico CNI releasing IP address ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.345 [INFO][5630] ipam_plugin.go 411: Releasing address using handleID ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" HandleID="k8s-pod-network.82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.345 [INFO][5630] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.345 [INFO][5630] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.358 [WARNING][5630] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" HandleID="k8s-pod-network.82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.358 [INFO][5630] ipam_plugin.go 439: Releasing address using workloadID ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" HandleID="k8s-pod-network.82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Workload="ip--172--31--30--27-k8s-calico--kube--controllers--599895b6cb--pdhbb-eth0" Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.361 [INFO][5630] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:59:59.367239 containerd[2009]: 2024-07-02 08:59:59.364 [INFO][5624] k8s.go 621: Teardown processing complete. ContainerID="82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e" Jul 2 08:59:59.367239 containerd[2009]: time="2024-07-02T08:59:59.367072866Z" level=info msg="TearDown network for sandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\" successfully" Jul 2 08:59:59.373391 containerd[2009]: time="2024-07-02T08:59:59.373331838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:59:59.373858 containerd[2009]: time="2024-07-02T08:59:59.373432086Z" level=info msg="RemovePodSandbox \"82bf225ce08c77af9e943d0376cbd840a45e0917ab5421d9fb92c6a2e274166e\" returns successfully" Jul 2 09:00:01.137895 systemd[1]: run-containerd-runc-k8s.io-00b56692b85bf70d14e83a9c57b5efab5f4f8421ad9f167bfbd85f102bb066b1-runc.0kE8JN.mount: Deactivated successfully. Jul 2 09:00:01.436073 systemd[1]: Started sshd@13-172.31.30.27:22-147.75.109.163:37192.service - OpenSSH per-connection server daemon (147.75.109.163:37192). Jul 2 09:00:01.625317 sshd[5656]: Accepted publickey for core from 147.75.109.163 port 37192 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:01.628610 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:01.641894 systemd-logind[1993]: New session 14 of user core. Jul 2 09:00:01.647910 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 09:00:01.917874 sshd[5656]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:01.924610 systemd[1]: sshd@13-172.31.30.27:22-147.75.109.163:37192.service: Deactivated successfully. Jul 2 09:00:01.928175 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 09:00:01.930679 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit. Jul 2 09:00:01.933908 systemd-logind[1993]: Removed session 14. Jul 2 09:00:06.959015 systemd[1]: Started sshd@14-172.31.30.27:22-147.75.109.163:57054.service - OpenSSH per-connection server daemon (147.75.109.163:57054). Jul 2 09:00:07.134839 sshd[5680]: Accepted publickey for core from 147.75.109.163 port 57054 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:07.137557 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:07.146017 systemd-logind[1993]: New session 15 of user core. Jul 2 09:00:07.153796 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 2 09:00:07.453958 sshd[5680]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:07.465422 systemd[1]: sshd@14-172.31.30.27:22-147.75.109.163:57054.service: Deactivated successfully. Jul 2 09:00:07.473822 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 09:00:07.475915 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit. Jul 2 09:00:07.479256 systemd-logind[1993]: Removed session 15. Jul 2 09:00:12.493120 systemd[1]: Started sshd@15-172.31.30.27:22-147.75.109.163:45816.service - OpenSSH per-connection server daemon (147.75.109.163:45816). Jul 2 09:00:12.690198 sshd[5718]: Accepted publickey for core from 147.75.109.163 port 45816 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:12.692240 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:12.707325 systemd-logind[1993]: New session 16 of user core. Jul 2 09:00:12.713024 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 09:00:12.996587 sshd[5718]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:13.003179 systemd[1]: sshd@15-172.31.30.27:22-147.75.109.163:45816.service: Deactivated successfully. Jul 2 09:00:13.007165 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 09:00:13.010028 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit. Jul 2 09:00:13.012151 systemd-logind[1993]: Removed session 16. Jul 2 09:00:18.044012 systemd[1]: Started sshd@16-172.31.30.27:22-147.75.109.163:45824.service - OpenSSH per-connection server daemon (147.75.109.163:45824). Jul 2 09:00:18.227016 sshd[5736]: Accepted publickey for core from 147.75.109.163 port 45824 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:18.232300 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:18.244978 systemd-logind[1993]: New session 17 of user core. Jul 2 09:00:18.257170 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 09:00:18.611035 sshd[5736]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:18.620278 systemd[1]: sshd@16-172.31.30.27:22-147.75.109.163:45824.service: Deactivated successfully. Jul 2 09:00:18.627640 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 09:00:18.629689 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit. Jul 2 09:00:18.653323 systemd[1]: Started sshd@17-172.31.30.27:22-147.75.109.163:45838.service - OpenSSH per-connection server daemon (147.75.109.163:45838). Jul 2 09:00:18.657031 systemd-logind[1993]: Removed session 17. Jul 2 09:00:18.843125 sshd[5749]: Accepted publickey for core from 147.75.109.163 port 45838 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:18.846679 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:18.858835 systemd-logind[1993]: New session 18 of user core. Jul 2 09:00:18.867201 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 09:00:19.408831 sshd[5749]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:19.418329 systemd[1]: sshd@17-172.31.30.27:22-147.75.109.163:45838.service: Deactivated successfully. Jul 2 09:00:19.423425 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 09:00:19.426866 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit. 
Jul 2 09:00:19.455019 systemd[1]: Started sshd@18-172.31.30.27:22-147.75.109.163:45846.service - OpenSSH per-connection server daemon (147.75.109.163:45846).
Jul 2 09:00:19.458936 systemd-logind[1993]: Removed session 18.
Jul 2 09:00:19.547286 kubelet[3252]: I0702 09:00:19.547074 3252 topology_manager.go:215] "Topology Admit Handler" podUID="a55f8f18-ce14-44b3-9e9b-d747cdb86014" podNamespace="calico-apiserver" podName="calico-apiserver-b46668f54-5ckds"
Jul 2 09:00:19.570293 systemd[1]: Created slice kubepods-besteffort-poda55f8f18_ce14_44b3_9e9b_d747cdb86014.slice - libcontainer container kubepods-besteffort-poda55f8f18_ce14_44b3_9e9b_d747cdb86014.slice.
Jul 2 09:00:19.663747 sshd[5760]: Accepted publickey for core from 147.75.109.163 port 45846 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:19.667462 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:19.679641 systemd-logind[1993]: New session 19 of user core.
Jul 2 09:00:19.685848 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 09:00:19.688521 kubelet[3252]: I0702 09:00:19.686848 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp6qf\" (UniqueName: \"kubernetes.io/projected/a55f8f18-ce14-44b3-9e9b-d747cdb86014-kube-api-access-zp6qf\") pod \"calico-apiserver-b46668f54-5ckds\" (UID: \"a55f8f18-ce14-44b3-9e9b-d747cdb86014\") " pod="calico-apiserver/calico-apiserver-b46668f54-5ckds"
Jul 2 09:00:19.688521 kubelet[3252]: I0702 09:00:19.686996 3252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a55f8f18-ce14-44b3-9e9b-d747cdb86014-calico-apiserver-certs\") pod \"calico-apiserver-b46668f54-5ckds\" (UID: \"a55f8f18-ce14-44b3-9e9b-d747cdb86014\") " pod="calico-apiserver/calico-apiserver-b46668f54-5ckds"
Jul 2 09:00:19.788794 kubelet[3252]: E0702 09:00:19.787748 3252 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Jul 2 09:00:19.788794 kubelet[3252]: E0702 09:00:19.788549 3252 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55f8f18-ce14-44b3-9e9b-d747cdb86014-calico-apiserver-certs podName:a55f8f18-ce14-44b3-9e9b-d747cdb86014 nodeName:}" failed. No retries permitted until 2024-07-02 09:00:20.287836988 +0000 UTC m=+82.343180960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a55f8f18-ce14-44b3-9e9b-d747cdb86014-calico-apiserver-certs") pod "calico-apiserver-b46668f54-5ckds" (UID: "a55f8f18-ce14-44b3-9e9b-d747cdb86014") : secret "calico-apiserver-certs" not found
Jul 2 09:00:20.483805 containerd[2009]: time="2024-07-02T09:00:20.483726879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b46668f54-5ckds,Uid:a55f8f18-ce14-44b3-9e9b-d747cdb86014,Namespace:calico-apiserver,Attempt:0,}"
Jul 2 09:00:21.096156 systemd-networkd[1850]: cali7b147eb6782: Link UP
Jul 2 09:00:21.097770 systemd-networkd[1850]: cali7b147eb6782: Gained carrier
Jul 2 09:00:21.108033 (udev-worker)[5798]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:20.705 [INFO][5779] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0 calico-apiserver-b46668f54- calico-apiserver a55f8f18-ce14-44b3-9e9b-d747cdb86014 1034 0 2024-07-02 09:00:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b46668f54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-27 calico-apiserver-b46668f54-5ckds eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b147eb6782 [] []}} ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Namespace="calico-apiserver" Pod="calico-apiserver-b46668f54-5ckds" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:20.705 [INFO][5779] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Namespace="calico-apiserver" Pod="calico-apiserver-b46668f54-5ckds" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:20.943 [INFO][5791] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" HandleID="k8s-pod-network.e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Workload="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.006 [INFO][5791] ipam_plugin.go 264: Auto assigning IP ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" HandleID="k8s-pod-network.e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Workload="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002bedb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-27", "pod":"calico-apiserver-b46668f54-5ckds", "timestamp":"2024-07-02 09:00:20.94308345 +0000 UTC"}, Hostname:"ip-172-31-30-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.007 [INFO][5791] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.007 [INFO][5791] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.007 [INFO][5791] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-27'
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.030 [INFO][5791] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" host="ip-172-31-30-27"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.039 [INFO][5791] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-27"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.050 [INFO][5791] ipam.go 489: Trying affinity for 192.168.12.128/26 host="ip-172-31-30-27"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.055 [INFO][5791] ipam.go 155: Attempting to load block cidr=192.168.12.128/26 host="ip-172-31-30-27"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.060 [INFO][5791] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ip-172-31-30-27"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.061 [INFO][5791] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" host="ip-172-31-30-27"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.063 [INFO][5791] ipam.go 1685: Creating new handle: k8s-pod-network.e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.072 [INFO][5791] ipam.go 1203: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" host="ip-172-31-30-27"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.083 [INFO][5791] ipam.go 1216: Successfully claimed IPs: [192.168.12.133/26] block=192.168.12.128/26 handle="k8s-pod-network.e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" host="ip-172-31-30-27"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.083 [INFO][5791] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.133/26] handle="k8s-pod-network.e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" host="ip-172-31-30-27"
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.083 [INFO][5791] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 09:00:21.134966 containerd[2009]: 2024-07-02 09:00:21.083 [INFO][5791] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.12.133/26] IPv6=[] ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" HandleID="k8s-pod-network.e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Workload="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0"
Jul 2 09:00:21.136215 containerd[2009]: 2024-07-02 09:00:21.088 [INFO][5779] k8s.go 386: Populated endpoint ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Namespace="calico-apiserver" Pod="calico-apiserver-b46668f54-5ckds" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0", GenerateName:"calico-apiserver-b46668f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"a55f8f18-ce14-44b3-9e9b-d747cdb86014", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b46668f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"", Pod:"calico-apiserver-b46668f54-5ckds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b147eb6782", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 09:00:21.136215 containerd[2009]: 2024-07-02 09:00:21.089 [INFO][5779] k8s.go 387: Calico CNI using IPs: [192.168.12.133/32] ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Namespace="calico-apiserver" Pod="calico-apiserver-b46668f54-5ckds" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0"
Jul 2 09:00:21.136215 containerd[2009]: 2024-07-02 09:00:21.089 [INFO][5779] dataplane_linux.go 68: Setting the host side veth name to cali7b147eb6782 ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Namespace="calico-apiserver" Pod="calico-apiserver-b46668f54-5ckds" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0"
Jul 2 09:00:21.136215 containerd[2009]: 2024-07-02 09:00:21.099 [INFO][5779] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Namespace="calico-apiserver" Pod="calico-apiserver-b46668f54-5ckds" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0"
Jul 2 09:00:21.136215 containerd[2009]: 2024-07-02 09:00:21.100 [INFO][5779] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Namespace="calico-apiserver" Pod="calico-apiserver-b46668f54-5ckds" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0", GenerateName:"calico-apiserver-b46668f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"a55f8f18-ce14-44b3-9e9b-d747cdb86014", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b46668f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-27", ContainerID:"e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9", Pod:"calico-apiserver-b46668f54-5ckds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b147eb6782", MAC:"92:41:38:62:ec:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 09:00:21.136215 containerd[2009]: 2024-07-02 09:00:21.126 [INFO][5779] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9" Namespace="calico-apiserver" Pod="calico-apiserver-b46668f54-5ckds" WorkloadEndpoint="ip--172--31--30--27-k8s-calico--apiserver--b46668f54--5ckds-eth0"
Jul 2 09:00:21.208199 containerd[2009]: time="2024-07-02T09:00:21.207914115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:00:21.208649 containerd[2009]: time="2024-07-02T09:00:21.208122759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:00:21.209511 containerd[2009]: time="2024-07-02T09:00:21.208793967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:00:21.210774 containerd[2009]: time="2024-07-02T09:00:21.210449355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:00:21.277895 systemd[1]: Started cri-containerd-e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9.scope - libcontainer container e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9.
Jul 2 09:00:21.816638 containerd[2009]: time="2024-07-02T09:00:21.816412758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b46668f54-5ckds,Uid:a55f8f18-ce14-44b3-9e9b-d747cdb86014,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9\""
Jul 2 09:00:21.824971 containerd[2009]: time="2024-07-02T09:00:21.823910370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul 2 09:00:23.053750 systemd-networkd[1850]: cali7b147eb6782: Gained IPv6LL
Jul 2 09:00:23.380834 sshd[5760]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:23.389112 systemd[1]: sshd@18-172.31.30.27:22-147.75.109.163:45846.service: Deactivated successfully.
Jul 2 09:00:23.396082 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 09:00:23.396566 systemd[1]: session-19.scope: Consumed 1.063s CPU time.
Jul 2 09:00:23.401093 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit.
Jul 2 09:00:23.426990 systemd[1]: Started sshd@19-172.31.30.27:22-147.75.109.163:59432.service - OpenSSH per-connection server daemon (147.75.109.163:59432).
Jul 2 09:00:23.430800 systemd-logind[1993]: Removed session 19.
Jul 2 09:00:23.636717 sshd[5862]: Accepted publickey for core from 147.75.109.163 port 59432 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:23.643642 sshd[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:23.659724 systemd-logind[1993]: New session 20 of user core.
Jul 2 09:00:23.664802 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 09:00:24.516199 sshd[5862]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:24.527266 systemd[1]: sshd@19-172.31.30.27:22-147.75.109.163:59432.service: Deactivated successfully.
Jul 2 09:00:24.534422 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 09:00:24.549624 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit.
Jul 2 09:00:24.578012 systemd[1]: Started sshd@20-172.31.30.27:22-147.75.109.163:59442.service - OpenSSH per-connection server daemon (147.75.109.163:59442).
Jul 2 09:00:24.584039 systemd-logind[1993]: Removed session 20.
Jul 2 09:00:24.800190 sshd[5885]: Accepted publickey for core from 147.75.109.163 port 59442 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:24.803867 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:24.822307 systemd-logind[1993]: New session 21 of user core.
Jul 2 09:00:24.828165 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 09:00:25.224188 sshd[5885]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:25.235823 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 09:00:25.239549 systemd[1]: sshd@20-172.31.30.27:22-147.75.109.163:59442.service: Deactivated successfully.
Jul 2 09:00:25.256400 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit.
Jul 2 09:00:25.261352 systemd-logind[1993]: Removed session 21.
Jul 2 09:00:25.726131 containerd[2009]: time="2024-07-02T09:00:25.726018477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:00:25.728421 containerd[2009]: time="2024-07-02T09:00:25.728358705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527"
Jul 2 09:00:25.732684 containerd[2009]: time="2024-07-02T09:00:25.732627969Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:00:25.740779 containerd[2009]: time="2024-07-02T09:00:25.740706945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:00:25.742876 containerd[2009]: time="2024-07-02T09:00:25.742791729Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 3.918806551s"
Jul 2 09:00:25.742876 containerd[2009]: time="2024-07-02T09:00:25.742863861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\""
Jul 2 09:00:25.749301 containerd[2009]: time="2024-07-02T09:00:25.748241469Z" level=info msg="CreateContainer within sandbox \"e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 09:00:25.772439 containerd[2009]: time="2024-07-02T09:00:25.772326718Z" level=info msg="CreateContainer within sandbox \"e4935c26fe5d12fdf95877325bed5e7239b0c15e977d72fdf28c5015c5ebb6a9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5f651076c14ab304ddb7c5efc196c56093c65049560bc3cb076862a1b99a74e4\""
Jul 2 09:00:25.773763 containerd[2009]: time="2024-07-02T09:00:25.773396110Z" level=info msg="StartContainer for \"5f651076c14ab304ddb7c5efc196c56093c65049560bc3cb076862a1b99a74e4\""
Jul 2 09:00:25.862829 systemd[1]: Started cri-containerd-5f651076c14ab304ddb7c5efc196c56093c65049560bc3cb076862a1b99a74e4.scope - libcontainer container 5f651076c14ab304ddb7c5efc196c56093c65049560bc3cb076862a1b99a74e4.
Jul 2 09:00:25.939390 ntpd[1987]: Listen normally on 13 cali7b147eb6782 [fe80::ecee:eeff:feee:eeee%11]:123
Jul 2 09:00:25.942285 ntpd[1987]: 2 Jul 09:00:25 ntpd[1987]: Listen normally on 13 cali7b147eb6782 [fe80::ecee:eeff:feee:eeee%11]:123
Jul 2 09:00:25.956790 containerd[2009]: time="2024-07-02T09:00:25.956261435Z" level=info msg="StartContainer for \"5f651076c14ab304ddb7c5efc196c56093c65049560bc3cb076862a1b99a74e4\" returns successfully"
Jul 2 09:00:28.549255 kubelet[3252]: I0702 09:00:28.549020 3252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b46668f54-5ckds" podStartSLOduration=5.628681412 podStartE2EDuration="9.548947151s" podCreationTimestamp="2024-07-02 09:00:19 +0000 UTC" firstStartedPulling="2024-07-02 09:00:21.822885474 +0000 UTC m=+83.878229446" lastFinishedPulling="2024-07-02 09:00:25.743151225 +0000 UTC m=+87.798495185" observedRunningTime="2024-07-02 09:00:26.797689127 +0000 UTC m=+88.853033123" watchObservedRunningTime="2024-07-02 09:00:28.548947151 +0000 UTC m=+90.604291123"
Jul 2 09:00:30.265765 systemd[1]: Started sshd@21-172.31.30.27:22-147.75.109.163:59456.service - OpenSSH per-connection server daemon (147.75.109.163:59456).
Jul 2 09:00:30.454572 sshd[5949]: Accepted publickey for core from 147.75.109.163 port 59456 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:30.455719 sshd[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:30.471970 systemd-logind[1993]: New session 22 of user core.
Jul 2 09:00:30.479700 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 09:00:30.782622 sshd[5949]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:30.793838 systemd[1]: sshd@21-172.31.30.27:22-147.75.109.163:59456.service: Deactivated successfully.
Jul 2 09:00:30.802694 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 09:00:30.807294 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit.
Jul 2 09:00:30.809595 systemd-logind[1993]: Removed session 22.
Jul 2 09:00:35.821005 systemd[1]: Started sshd@22-172.31.30.27:22-147.75.109.163:44526.service - OpenSSH per-connection server daemon (147.75.109.163:44526).
Jul 2 09:00:35.998448 sshd[5991]: Accepted publickey for core from 147.75.109.163 port 44526 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:36.001168 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:36.009948 systemd-logind[1993]: New session 23 of user core.
Jul 2 09:00:36.014745 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 09:00:36.269880 sshd[5991]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:36.276242 systemd[1]: sshd@22-172.31.30.27:22-147.75.109.163:44526.service: Deactivated successfully.
Jul 2 09:00:36.281365 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 09:00:36.283629 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit.
Jul 2 09:00:36.285520 systemd-logind[1993]: Removed session 23.
Jul 2 09:00:41.303733 systemd[1]: Started sshd@23-172.31.30.27:22-147.75.109.163:44528.service - OpenSSH per-connection server daemon (147.75.109.163:44528).
Jul 2 09:00:41.486995 sshd[6027]: Accepted publickey for core from 147.75.109.163 port 44528 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:41.490533 sshd[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:41.500785 systemd-logind[1993]: New session 24 of user core.
Jul 2 09:00:41.506791 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 09:00:41.747887 sshd[6027]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:41.754189 systemd[1]: sshd@23-172.31.30.27:22-147.75.109.163:44528.service: Deactivated successfully.
Jul 2 09:00:41.760060 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 09:00:41.761811 systemd-logind[1993]: Session 24 logged out. Waiting for processes to exit.
Jul 2 09:00:41.763733 systemd-logind[1993]: Removed session 24.
Jul 2 09:00:46.786016 systemd[1]: Started sshd@24-172.31.30.27:22-147.75.109.163:55392.service - OpenSSH per-connection server daemon (147.75.109.163:55392).
Jul 2 09:00:46.970052 sshd[6049]: Accepted publickey for core from 147.75.109.163 port 55392 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:46.972718 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:46.982077 systemd-logind[1993]: New session 25 of user core.
Jul 2 09:00:46.991907 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 09:00:47.242851 sshd[6049]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:47.247916 systemd[1]: sshd@24-172.31.30.27:22-147.75.109.163:55392.service: Deactivated successfully.
Jul 2 09:00:47.253632 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 09:00:47.260237 systemd-logind[1993]: Session 25 logged out. Waiting for processes to exit.
Jul 2 09:00:47.263980 systemd-logind[1993]: Removed session 25.
Jul 2 09:00:52.289973 systemd[1]: Started sshd@25-172.31.30.27:22-147.75.109.163:55396.service - OpenSSH per-connection server daemon (147.75.109.163:55396).
Jul 2 09:00:52.468455 sshd[6062]: Accepted publickey for core from 147.75.109.163 port 55396 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:52.471074 sshd[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:52.479560 systemd-logind[1993]: New session 26 of user core.
Jul 2 09:00:52.484750 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 09:00:52.733856 sshd[6062]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:52.738856 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit.
Jul 2 09:00:52.739738 systemd[1]: sshd@25-172.31.30.27:22-147.75.109.163:55396.service: Deactivated successfully.
Jul 2 09:00:52.744138 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 09:00:52.749598 systemd-logind[1993]: Removed session 26.
Jul 2 09:00:57.780011 systemd[1]: Started sshd@26-172.31.30.27:22-147.75.109.163:35274.service - OpenSSH per-connection server daemon (147.75.109.163:35274).
Jul 2 09:00:57.969408 sshd[6099]: Accepted publickey for core from 147.75.109.163 port 35274 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo
Jul 2 09:00:57.972072 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:00:57.982106 systemd-logind[1993]: New session 27 of user core.
Jul 2 09:00:57.988764 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 09:00:58.235290 sshd[6099]: pam_unix(sshd:session): session closed for user core
Jul 2 09:00:58.244382 systemd[1]: sshd@26-172.31.30.27:22-147.75.109.163:35274.service: Deactivated successfully.
Jul 2 09:00:58.249724 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 09:00:58.252953 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit.
Jul 2 09:00:58.256840 systemd-logind[1993]: Removed session 27.
Jul 2 09:01:12.266860 systemd[1]: cri-containerd-1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83.scope: Deactivated successfully.
Jul 2 09:01:12.267349 systemd[1]: cri-containerd-1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83.scope: Consumed 9.971s CPU time.
Jul 2 09:01:12.318145 containerd[2009]: time="2024-07-02T09:01:12.316065737Z" level=info msg="shim disconnected" id=1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83 namespace=k8s.io
Jul 2 09:01:12.318145 containerd[2009]: time="2024-07-02T09:01:12.316174397Z" level=warning msg="cleaning up after shim disconnected" id=1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83 namespace=k8s.io
Jul 2 09:01:12.318145 containerd[2009]: time="2024-07-02T09:01:12.317550905Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:01:12.319732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83-rootfs.mount: Deactivated successfully.
Jul 2 09:01:12.341555 containerd[2009]: time="2024-07-02T09:01:12.341357261Z" level=warning msg="cleanup warnings time=\"2024-07-02T09:01:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 09:01:12.908372 kubelet[3252]: I0702 09:01:12.908296 3252 scope.go:117] "RemoveContainer" containerID="1eb5d1ce36cc65e0a54f2d501ea6af29b4402484fc22b0216306a7bac38f9e83"
Jul 2 09:01:12.912242 containerd[2009]: time="2024-07-02T09:01:12.911912252Z" level=info msg="CreateContainer within sandbox \"179cd9668e4b970fc18ea3c1e2bdaf80517ae3969b186ee3e59c306dad459f94\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 2 09:01:12.936812 containerd[2009]: time="2024-07-02T09:01:12.936696584Z" level=info msg="CreateContainer within sandbox \"179cd9668e4b970fc18ea3c1e2bdaf80517ae3969b186ee3e59c306dad459f94\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"bfe21164ca6394ce3bdc549590d6232474791a4d0fc6b4a335d84d2f19e6b8f6\""
Jul 2 09:01:12.937736 containerd[2009]: time="2024-07-02T09:01:12.937370000Z" level=info msg="StartContainer for \"bfe21164ca6394ce3bdc549590d6232474791a4d0fc6b4a335d84d2f19e6b8f6\""
Jul 2 09:01:12.997823 systemd[1]: Started cri-containerd-bfe21164ca6394ce3bdc549590d6232474791a4d0fc6b4a335d84d2f19e6b8f6.scope - libcontainer container bfe21164ca6394ce3bdc549590d6232474791a4d0fc6b4a335d84d2f19e6b8f6.
Jul 2 09:01:13.054550 containerd[2009]: time="2024-07-02T09:01:13.054453736Z" level=info msg="StartContainer for \"bfe21164ca6394ce3bdc549590d6232474791a4d0fc6b4a335d84d2f19e6b8f6\" returns successfully"
Jul 2 09:01:13.215755 systemd[1]: cri-containerd-114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6.scope: Deactivated successfully.
Jul 2 09:01:13.216272 systemd[1]: cri-containerd-114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6.scope: Consumed 4.408s CPU time, 20.6M memory peak, 0B memory swap peak.
Jul 2 09:01:13.262604 containerd[2009]: time="2024-07-02T09:01:13.262299941Z" level=info msg="shim disconnected" id=114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6 namespace=k8s.io
Jul 2 09:01:13.262604 containerd[2009]: time="2024-07-02T09:01:13.262405109Z" level=warning msg="cleaning up after shim disconnected" id=114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6 namespace=k8s.io
Jul 2 09:01:13.262604 containerd[2009]: time="2024-07-02T09:01:13.262450589Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:01:13.316422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6-rootfs.mount: Deactivated successfully.
Jul 2 09:01:13.912844 kubelet[3252]: I0702 09:01:13.912706 3252 scope.go:117] "RemoveContainer" containerID="114c9f238211104a163f9e53dadf5c1ebd57e369d513a1c76715e1943b8afde6"
Jul 2 09:01:13.919021 containerd[2009]: time="2024-07-02T09:01:13.918752349Z" level=info msg="CreateContainer within sandbox \"4b9fb98d651fe162b59cbc038a96cd08944934ac230d4d31c8289d3d0d53cf40\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 09:01:13.945318 containerd[2009]: time="2024-07-02T09:01:13.945234957Z" level=info msg="CreateContainer within sandbox \"4b9fb98d651fe162b59cbc038a96cd08944934ac230d4d31c8289d3d0d53cf40\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"91cf8699d1d84dbfb904da908f7d27a927586aede3451382c25a464c92f9912e\""
Jul 2 09:01:13.947731 containerd[2009]: time="2024-07-02T09:01:13.947448141Z" level=info msg="StartContainer for \"91cf8699d1d84dbfb904da908f7d27a927586aede3451382c25a464c92f9912e\""
Jul 2 09:01:14.003786 systemd[1]: Started cri-containerd-91cf8699d1d84dbfb904da908f7d27a927586aede3451382c25a464c92f9912e.scope - libcontainer container 91cf8699d1d84dbfb904da908f7d27a927586aede3451382c25a464c92f9912e.
Jul 2 09:01:14.076714 containerd[2009]: time="2024-07-02T09:01:14.076588710Z" level=info msg="StartContainer for \"91cf8699d1d84dbfb904da908f7d27a927586aede3451382c25a464c92f9912e\" returns successfully"
Jul 2 09:01:17.878826 systemd[1]: cri-containerd-5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98.scope: Deactivated successfully.
Jul 2 09:01:17.881679 systemd[1]: cri-containerd-5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98.scope: Consumed 4.179s CPU time, 16.3M memory peak, 0B memory swap peak.
Jul 2 09:01:17.943330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98-rootfs.mount: Deactivated successfully.
Jul 2 09:01:17.944029 containerd[2009]: time="2024-07-02T09:01:17.943749229Z" level=info msg="shim disconnected" id=5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98 namespace=k8s.io
Jul 2 09:01:17.944530 containerd[2009]: time="2024-07-02T09:01:17.944028217Z" level=warning msg="cleaning up after shim disconnected" id=5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98 namespace=k8s.io
Jul 2 09:01:17.944530 containerd[2009]: time="2024-07-02T09:01:17.944051785Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 09:01:18.946308 kubelet[3252]: I0702 09:01:18.946255 3252 scope.go:117] "RemoveContainer" containerID="5a86384925f988f4bafa62040ef759bd9fb1f3cfb16a7282851dc064230b1d98"
Jul 2 09:01:18.950518 containerd[2009]: time="2024-07-02T09:01:18.950435678Z" level=info msg="CreateContainer within sandbox \"66780602eed93f8054bc9d452389dcd1bb64110c7a9cf9990c0295312ab2a4ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 09:01:18.984519 containerd[2009]: time="2024-07-02T09:01:18.983704634Z" level=info msg="CreateContainer within sandbox \"66780602eed93f8054bc9d452389dcd1bb64110c7a9cf9990c0295312ab2a4ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"20f53b3988bc4843dcb1b0326717242bf0d6327ef264b343a1ea8eabaf52dba3\""
Jul 2 09:01:18.989242 containerd[2009]: time="2024-07-02T09:01:18.989180594Z" level=info msg="StartContainer for \"20f53b3988bc4843dcb1b0326717242bf0d6327ef264b343a1ea8eabaf52dba3\""
Jul 2 09:01:19.084813 systemd[1]: Started cri-containerd-20f53b3988bc4843dcb1b0326717242bf0d6327ef264b343a1ea8eabaf52dba3.scope - libcontainer container 20f53b3988bc4843dcb1b0326717242bf0d6327ef264b343a1ea8eabaf52dba3.
Jul 2 09:01:19.166326 containerd[2009]: time="2024-07-02T09:01:19.166237619Z" level=info msg="StartContainer for \"20f53b3988bc4843dcb1b0326717242bf0d6327ef264b343a1ea8eabaf52dba3\" returns successfully"
Jul 2 09:01:20.446580 kubelet[3252]: E0702 09:01:20.446354 3252 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-27?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 09:01:30.447162 kubelet[3252]: E0702 09:01:30.446908 3252 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-27?timeout=10s\": context deadline exceeded"