Jul 2 08:07:04.214004 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 2 08:07:04.214050 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024 Jul 2 08:07:04.214075 kernel: KASLR disabled due to lack of seed Jul 2 08:07:04.214092 kernel: efi: EFI v2.7 by EDK II Jul 2 08:07:04.214107 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18 Jul 2 08:07:04.214123 kernel: ACPI: Early table checksum verification disabled Jul 2 08:07:04.214141 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 2 08:07:04.214156 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 2 08:07:04.214172 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 2 08:07:04.214188 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 2 08:07:04.214209 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 2 08:07:04.214258 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 2 08:07:04.214278 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 2 08:07:04.214296 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 2 08:07:04.214315 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 2 08:07:04.214339 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 2 08:07:04.214360 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 2 08:07:04.214377 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 2 08:07:04.214395 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 2 08:07:04.214414 kernel: printk: bootconsole [uart0] enabled Jul 2 08:07:04.214431 kernel: NUMA: Failed to initialise from firmware Jul 2 08:07:04.214449 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 2 08:07:04.214466 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jul 2 08:07:04.214483 kernel: Zone ranges: Jul 2 08:07:04.214500 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 2 08:07:04.214516 kernel: DMA32 empty Jul 2 08:07:04.214538 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 2 08:07:04.214555 kernel: Movable zone start for each node Jul 2 08:07:04.214571 kernel: Early memory node ranges Jul 2 08:07:04.214588 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 2 08:07:04.214620 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 2 08:07:04.214645 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 2 08:07:04.214662 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 2 08:07:04.214679 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 2 08:07:04.214696 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 2 08:07:04.214713 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 2 08:07:04.214730 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 2 08:07:04.214747 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 2 08:07:04.214772 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Jul 2 08:07:04.214791 kernel: psci: probing for conduit method from ACPI. Jul 2 08:07:04.214819 kernel: psci: PSCIv1.0 detected in firmware. Jul 2 08:07:04.214838 kernel: psci: Using standard PSCI v0.2 function IDs Jul 2 08:07:04.214856 kernel: psci: Trusted OS migration not required Jul 2 08:07:04.214878 kernel: psci: SMC Calling Convention v1.1 Jul 2 08:07:04.214896 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jul 2 08:07:04.214914 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jul 2 08:07:04.214931 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 2 08:07:04.214949 kernel: Detected PIPT I-cache on CPU0 Jul 2 08:07:04.214966 kernel: CPU features: detected: GIC system register CPU interface Jul 2 08:07:04.214984 kernel: CPU features: detected: Spectre-v2 Jul 2 08:07:04.215001 kernel: CPU features: detected: Spectre-v3a Jul 2 08:07:04.215019 kernel: CPU features: detected: Spectre-BHB Jul 2 08:07:04.215037 kernel: CPU features: detected: ARM erratum 1742098 Jul 2 08:07:04.215054 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 2 08:07:04.215077 kernel: alternatives: applying boot alternatives Jul 2 08:07:04.215097 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7 Jul 2 08:07:04.215116 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 08:07:04.215134 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 08:07:04.215152 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 08:07:04.215170 kernel: Fallback order for Node 0: 0 Jul 2 08:07:04.215188 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 2 08:07:04.215205 kernel: Policy zone: Normal Jul 2 08:07:04.217282 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 08:07:04.217325 kernel: software IO TLB: area num 2. Jul 2 08:07:04.217344 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 2 08:07:04.217375 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved) Jul 2 08:07:04.217394 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 08:07:04.217412 kernel: trace event string verifier disabled Jul 2 08:07:04.217431 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 08:07:04.217449 kernel: rcu: RCU event tracing is enabled. Jul 2 08:07:04.217467 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 08:07:04.217485 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 08:07:04.217503 kernel: Tracing variant of Tasks RCU enabled. Jul 2 08:07:04.217520 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 08:07:04.217538 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 08:07:04.217555 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 2 08:07:04.217578 kernel: GICv3: 96 SPIs implemented Jul 2 08:07:04.217597 kernel: GICv3: 0 Extended SPIs implemented Jul 2 08:07:04.217614 kernel: Root IRQ handler: gic_handle_irq Jul 2 08:07:04.217631 kernel: GICv3: GICv3 features: 16 PPIs Jul 2 08:07:04.217649 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 2 08:07:04.217666 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 2 08:07:04.217684 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Jul 2 08:07:04.217702 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Jul 2 08:07:04.217720 kernel: GICv3: using LPI property table @0x00000004000e0000 Jul 2 08:07:04.217738 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 2 08:07:04.217756 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Jul 2 08:07:04.217773 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 08:07:04.217797 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 2 08:07:04.217816 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 2 08:07:04.217834 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 2 08:07:04.217852 kernel: Console: colour dummy device 80x25 Jul 2 08:07:04.217871 kernel: printk: console [tty1] enabled Jul 2 08:07:04.217889 kernel: ACPI: Core revision 20230628 Jul 2 08:07:04.217908 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 2 08:07:04.217926 kernel: pid_max: default: 32768 minimum: 301 Jul 2 08:07:04.217944 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 08:07:04.217963 kernel: SELinux: Initializing. Jul 2 08:07:04.217988 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:07:04.218007 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:07:04.218025 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:07:04.218043 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:07:04.218061 kernel: rcu: Hierarchical SRCU implementation. Jul 2 08:07:04.218080 kernel: rcu: Max phase no-delay instances is 400. Jul 2 08:07:04.218097 kernel: Platform MSI: ITS@0x10080000 domain created Jul 2 08:07:04.218115 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 2 08:07:04.218133 kernel: Remapping and enabling EFI services. Jul 2 08:07:04.218156 kernel: smp: Bringing up secondary CPUs ... Jul 2 08:07:04.218174 kernel: Detected PIPT I-cache on CPU1 Jul 2 08:07:04.218192 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 2 08:07:04.218210 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Jul 2 08:07:04.218273 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 2 08:07:04.218296 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 08:07:04.218314 kernel: SMP: Total of 2 processors activated. 
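The delay-loop line in the stretch above ("Calibrating delay loop (skipped)... 166.66 BogoMIPS (lpj=83333)") can be reproduced by hand: with calibration skipped, loops-per-jiffy is derived from the 83.33 MHz architected timer, and BogoMIPS follows from it. A minimal sketch of that arithmetic; the tick rate HZ=1000 is an assumption implied by the numbers, not something the log states:

```python
# Reproduce "166.66 BogoMIPS (lpj=83333)" from the 83.33 MHz arch timer.
# HZ = 1000 is assumed here; it is implied by lpj == 83333, not printed in the log.
timer_hz = 83_333_333          # arch_timer rate reported as 83.33 MHz
HZ = 1000                      # assumed scheduler tick rate
lpj = timer_hz // HZ           # loops-per-jiffy when calibration is skipped -> 83333
bogomips = lpj * HZ / 500_000  # conventional BogoMIPS-from-lpj formula -> 166.666
print(lpj, bogomips)           # 83333 166.666 (the kernel truncates to 166.66)
```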
Jul 2 08:07:04.218332 kernel: CPU features: detected: 32-bit EL0 Support Jul 2 08:07:04.218350 kernel: CPU features: detected: 32-bit EL1 Support Jul 2 08:07:04.218376 kernel: CPU features: detected: CRC32 instructions Jul 2 08:07:04.218394 kernel: CPU: All CPU(s) started at EL1 Jul 2 08:07:04.218426 kernel: alternatives: applying system-wide alternatives Jul 2 08:07:04.218450 kernel: devtmpfs: initialized Jul 2 08:07:04.218469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 08:07:04.218487 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 08:07:04.218506 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 08:07:04.218524 kernel: SMBIOS 3.0.0 present. Jul 2 08:07:04.218542 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 2 08:07:04.218566 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 08:07:04.218584 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 2 08:07:04.218617 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 2 08:07:04.218643 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 2 08:07:04.218662 kernel: audit: initializing netlink subsys (disabled) Jul 2 08:07:04.218680 kernel: audit: type=2000 audit(0.301:1): state=initialized audit_enabled=0 res=1 Jul 2 08:07:04.218699 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 08:07:04.218724 kernel: cpuidle: using governor menu Jul 2 08:07:04.218743 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 2 08:07:04.218761 kernel: ASID allocator initialised with 65536 entries Jul 2 08:07:04.218779 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 08:07:04.218797 kernel: Serial: AMBA PL011 UART driver Jul 2 08:07:04.218815 kernel: Modules: 17600 pages in range for non-PLT usage Jul 2 08:07:04.218834 kernel: Modules: 509120 pages in range for PLT usage Jul 2 08:07:04.218852 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 08:07:04.218870 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 08:07:04.218893 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 2 08:07:04.218912 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 2 08:07:04.218931 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 08:07:04.218949 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 08:07:04.218967 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 2 08:07:04.218985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 2 08:07:04.219004 kernel: ACPI: Added _OSI(Module Device) Jul 2 08:07:04.219022 kernel: ACPI: Added _OSI(Processor Device) Jul 2 08:07:04.219040 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 08:07:04.219063 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 08:07:04.219081 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 08:07:04.219100 kernel: ACPI: Interpreter enabled Jul 2 08:07:04.219118 kernel: ACPI: Using GIC for interrupt routing Jul 2 08:07:04.219136 kernel: ACPI: MCFG table detected, 1 entries Jul 2 08:07:04.219154 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 2 08:07:04.219552 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 08:07:04.219775 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] 
Jul 2 08:07:04.220000 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 2 08:07:04.220212 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 2 08:07:04.220462 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 2 08:07:04.220492 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 2 08:07:04.220511 kernel: acpiphp: Slot [1] registered Jul 2 08:07:04.220530 kernel: acpiphp: Slot [2] registered Jul 2 08:07:04.220548 kernel: acpiphp: Slot [3] registered Jul 2 08:07:04.220568 kernel: acpiphp: Slot [4] registered Jul 2 08:07:04.220587 kernel: acpiphp: Slot [5] registered Jul 2 08:07:04.220615 kernel: acpiphp: Slot [6] registered Jul 2 08:07:04.222299 kernel: acpiphp: Slot [7] registered Jul 2 08:07:04.222334 kernel: acpiphp: Slot [8] registered Jul 2 08:07:04.222354 kernel: acpiphp: Slot [9] registered Jul 2 08:07:04.222373 kernel: acpiphp: Slot [10] registered Jul 2 08:07:04.222392 kernel: acpiphp: Slot [11] registered Jul 2 08:07:04.222412 kernel: acpiphp: Slot [12] registered Jul 2 08:07:04.222431 kernel: acpiphp: Slot [13] registered Jul 2 08:07:04.222450 kernel: acpiphp: Slot [14] registered Jul 2 08:07:04.222484 kernel: acpiphp: Slot [15] registered Jul 2 08:07:04.222503 kernel: acpiphp: Slot [16] registered Jul 2 08:07:04.222522 kernel: acpiphp: Slot [17] registered Jul 2 08:07:04.222542 kernel: acpiphp: Slot [18] registered Jul 2 08:07:04.222564 kernel: acpiphp: Slot [19] registered Jul 2 08:07:04.222583 kernel: acpiphp: Slot [20] registered Jul 2 08:07:04.222602 kernel: acpiphp: Slot [21] registered Jul 2 08:07:04.222646 kernel: acpiphp: Slot [22] registered Jul 2 08:07:04.222666 kernel: acpiphp: Slot [23] registered Jul 2 08:07:04.222685 kernel: acpiphp: Slot [24] registered Jul 2 08:07:04.222714 kernel: acpiphp: Slot [25] registered Jul 2 08:07:04.222733 kernel: acpiphp: Slot [26] registered Jul 2 08:07:04.222752 kernel: acpiphp: Slot [27] registered Jul 2 08:07:04.222771 kernel: acpiphp: Slot [28] registered Jul 2 08:07:04.222790 kernel: acpiphp: Slot [29] registered Jul 2 08:07:04.222809 kernel: acpiphp: Slot [30] registered Jul 2 08:07:04.222828 kernel: acpiphp: Slot [31] registered Jul 2 08:07:04.222846 kernel: PCI host bridge to bus 0000:00 Jul 2 08:07:04.223114 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 2 08:07:04.225001 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 2 08:07:04.225211 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 2 08:07:04.225545 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 2 08:07:04.225786 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 2 08:07:04.226015 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 2 08:07:04.226246 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 2 08:07:04.226493 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 2 08:07:04.226731 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 2 08:07:04.226942 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:07:04.227162 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 2 08:07:04.230383 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 2 08:07:04.230650 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 2 08:07:04.230866 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] 
Jul 2 08:07:04.231094 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:07:04.231344 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 2 08:07:04.231570 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 2 08:07:04.231793 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 2 08:07:04.232004 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 2 08:07:04.232299 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 2 08:07:04.232505 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 2 08:07:04.232704 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 2 08:07:04.232899 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 2 08:07:04.232927 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 2 08:07:04.232946 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 2 08:07:04.232965 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 2 08:07:04.232984 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 2 08:07:04.233002 kernel: iommu: Default domain type: Translated Jul 2 08:07:04.233021 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 2 08:07:04.233047 kernel: efivars: Registered efivars operations Jul 2 08:07:04.233066 kernel: vgaarb: loaded Jul 2 08:07:04.233084 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 2 08:07:04.233103 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 08:07:04.233122 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 08:07:04.233141 kernel: pnp: PnP ACPI init Jul 2 08:07:04.233421 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 2 08:07:04.233454 kernel: pnp: PnP ACPI: found 1 devices Jul 2 08:07:04.233481 kernel: NET: Registered PF_INET protocol family Jul 2 08:07:04.233500 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 08:07:04.233519 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 08:07:04.233538 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 08:07:04.233557 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 08:07:04.233575 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 08:07:04.233594 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 08:07:04.233613 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:07:04.233631 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:07:04.233655 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 08:07:04.233675 kernel: PCI: CLS 0 bytes, default 64 Jul 2 08:07:04.233693 kernel: kvm [1]: HYP mode not available Jul 2 08:07:04.233711 kernel: Initialise system trusted keyrings Jul 2 08:07:04.233730 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 08:07:04.233749 kernel: Key type asymmetric registered Jul 2 08:07:04.233767 kernel: Asymmetric key parser 'x509' registered Jul 2 08:07:04.233785 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 2 08:07:04.233804 kernel: io scheduler mq-deadline registered Jul 2 08:07:04.233827 kernel: io scheduler kyber registered Jul 2 08:07:04.233846 kernel: io scheduler bfq registered Jul 2 08:07:04.234059 kernel: pl061_gpio 
ARMH0061:00: PL061 GPIO chip registered Jul 2 08:07:04.234087 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 2 08:07:04.234106 kernel: ACPI: button: Power Button [PWRB] Jul 2 08:07:04.234125 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 2 08:07:04.234143 kernel: ACPI: button: Sleep Button [SLPB] Jul 2 08:07:04.234162 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 08:07:04.234187 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 2 08:07:04.236531 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 2 08:07:04.236579 kernel: printk: console [ttyS0] disabled Jul 2 08:07:04.236616 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 2 08:07:04.236641 kernel: printk: console [ttyS0] enabled Jul 2 08:07:04.236661 kernel: printk: bootconsole [uart0] disabled Jul 2 08:07:04.236680 kernel: thunder_xcv, ver 1.0 Jul 2 08:07:04.236698 kernel: thunder_bgx, ver 1.0 Jul 2 08:07:04.236716 kernel: nicpf, ver 1.0 Jul 2 08:07:04.236735 kernel: nicvf, ver 1.0 Jul 2 08:07:04.236994 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 2 08:07:04.237194 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:07:03 UTC (1719907623) Jul 2 08:07:04.237241 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 08:07:04.237267 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 2 08:07:04.237286 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 2 08:07:04.237306 kernel: watchdog: Hard watchdog permanently disabled Jul 2 08:07:04.237325 kernel: NET: Registered PF_INET6 protocol family Jul 2 08:07:04.237344 kernel: Segment Routing with IPv6 Jul 2 08:07:04.237371 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 08:07:04.237390 kernel: NET: Registered PF_PACKET protocol family Jul 2 08:07:04.237408 kernel: Key type dns_resolver registered Jul 2 08:07:04.237428 kernel: registered taskstats version 1 Jul 2 08:07:04.237446 kernel: Loading compiled-in X.509 certificates Jul 2 08:07:04.237465 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf' Jul 2 08:07:04.237483 kernel: Key type .fscrypt registered Jul 2 08:07:04.237501 kernel: Key type fscrypt-provisioning registered Jul 2 08:07:04.237519 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 08:07:04.237543 kernel: ima: Allocated hash algorithm: sha1 Jul 2 08:07:04.237561 kernel: ima: No architecture policies found Jul 2 08:07:04.237580 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 2 08:07:04.237598 kernel: clk: Disabling unused clocks Jul 2 08:07:04.237617 kernel: Freeing unused kernel memory: 39040K Jul 2 08:07:04.237635 kernel: Run /init as init process Jul 2 08:07:04.237653 kernel: with arguments: Jul 2 08:07:04.237672 kernel: /init Jul 2 08:07:04.237690 kernel: with environment: Jul 2 08:07:04.237713 kernel: HOME=/ Jul 2 08:07:04.237732 kernel: TERM=linux Jul 2 08:07:04.237750 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 08:07:04.237773 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:07:04.237796 systemd[1]: Detected virtualization amazon. 
Jul 2 08:07:04.237817 systemd[1]: Detected architecture arm64. Jul 2 08:07:04.237837 systemd[1]: Running in initrd. Jul 2 08:07:04.237856 systemd[1]: No hostname configured, using default hostname. Jul 2 08:07:04.237882 systemd[1]: Hostname set to . Jul 2 08:07:04.237903 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:07:04.237923 systemd[1]: Queued start job for default target initrd.target. Jul 2 08:07:04.237943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:07:04.237963 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:07:04.237985 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 08:07:04.238005 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:07:04.238031 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 08:07:04.238052 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 08:07:04.238075 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 08:07:04.238096 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 08:07:04.238116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:07:04.238136 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:07:04.238156 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:07:04.238182 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:07:04.238203 systemd[1]: Reached target swap.target - Swaps. Jul 2 08:07:04.241286 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:07:04.241335 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:07:04.241356 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:07:04.241379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 08:07:04.241399 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 08:07:04.241420 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:07:04.241441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:07:04.241474 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:07:04.241496 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:07:04.241517 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 2 08:07:04.241538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:07:04.241558 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 08:07:04.241579 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 08:07:04.241600 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:07:04.241620 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:07:04.241647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:07:04.241668 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 08:07:04.241688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 2 08:07:04.241708 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 08:07:04.241730 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 08:07:04.241816 systemd-journald[250]: Collecting audit messages is disabled. Jul 2 08:07:04.241864 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:07:04.241885 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:07:04.241906 systemd-journald[250]: Journal started Jul 2 08:07:04.241950 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2cb4017a358d81a33f35dc938518fe) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:07:04.207319 systemd-modules-load[252]: Inserted module 'overlay' Jul 2 08:07:04.253700 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:07:04.253765 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 08:07:04.254125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:07:04.262834 systemd-modules-load[252]: Inserted module 'br_netfilter' Jul 2 08:07:04.265406 kernel: Bridge firewalling registered Jul 2 08:07:04.266404 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:07:04.277630 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:07:04.287636 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:07:04.298772 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:07:04.310579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:07:04.339380 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:07:04.357647 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 08:07:04.360921 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:07:04.378294 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:07:04.396120 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:07:04.405446 dracut-cmdline[283]: dracut-dracut-053 Jul 2 08:07:04.415188 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7 Jul 2 08:07:04.484844 systemd-resolved[292]: Positive Trust Anchors: Jul 2 08:07:04.484885 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:07:04.484949 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:07:04.603262 kernel: SCSI subsystem initialized Jul 2 08:07:04.609267 kernel: Loading iSCSI transport class v2.0-870. Jul 2 08:07:04.623279 kernel: iscsi: registered transport (tcp) Jul 2 08:07:04.647822 kernel: iscsi: registered transport (qla4xxx) Jul 2 08:07:04.647898 kernel: QLogic iSCSI HBA Driver Jul 2 08:07:04.726271 kernel: random: crng init done Jul 2 08:07:04.726675 systemd-resolved[292]: Defaulting to hostname 'linux'. Jul 2 08:07:04.730465 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:07:04.747047 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:07:04.764350 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 08:07:04.773637 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 08:07:04.820033 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 08:07:04.820150 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:07:04.820184 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 08:07:04.894292 kernel: raid6: neonx8 gen() 6720 MB/s Jul 2 08:07:04.911287 kernel: raid6: neonx4 gen() 6492 MB/s Jul 2 08:07:04.928284 kernel: raid6: neonx2 gen() 5409 MB/s Jul 2 08:07:04.945291 kernel: raid6: neonx1 gen() 3925 MB/s Jul 2 08:07:04.962287 kernel: raid6: int64x8 gen() 3785 MB/s Jul 2 08:07:04.979283 kernel: raid6: int64x4 gen() 3673 MB/s Jul 2 08:07:04.996277 kernel: raid6: int64x2 gen() 3581 MB/s Jul 2 08:07:05.013987 kernel: raid6: int64x1 gen() 2743 MB/s Jul 2 08:07:05.014068 kernel: raid6: using algorithm neonx8 gen() 6720 MB/s Jul 2 08:07:05.031978 kernel: raid6: .... xor() 4783 MB/s, rmw enabled Jul 2 08:07:05.032062 kernel: raid6: using neon recovery algorithm Jul 2 08:07:05.040279 kernel: xor: measuring software checksum speed Jul 2 08:07:05.042268 kernel: 8regs : 11094 MB/sec Jul 2 08:07:05.044270 kernel: 32regs : 11997 MB/sec Jul 2 08:07:05.046270 kernel: arm64_neon : 9639 MB/sec Jul 2 08:07:05.046329 kernel: xor: using function: 32regs (11997 MB/sec) Jul 2 08:07:05.137289 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 08:07:05.161498 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:07:05.174709 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:07:05.220659 systemd-udevd[470]: Using default interface naming scheme 'v255'. Jul 2 08:07:05.230876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:07:05.244479 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 08:07:05.284506 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Jul 2 08:07:05.350027 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
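The kernel command line echoed earlier (and repeated by dracut-cmdline above) carries the parameters the initrd later acts on: mount.usr, verity.usr, verity.usrhash, root=LABEL=ROOT, and so on. Below is a minimal sketch of splitting such a string into key/value pairs and pulling out the dm-verity parameters for /dev/mapper/usr; it is an illustration only, not the parser dracut or Flatcar actually uses, and the command line is abbreviated from the log:

```python
# Illustrative only: split a kernel command line into key/value pairs and pick
# out the parameters behind the /dev/mapper/usr verity mount seen in this log.
import shlex

cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
    "verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7"
)

params = {}
for token in shlex.split(cmdline):
    key, _, value = token.partition("=")  # split on the first '=' only
    params[key] = value                   # repeated keys (e.g. console=) keep the last value

print(params["mount.usr"])       # /dev/mapper/usr
print(params["verity.usr"])      # PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132
print(params["verity.usrhash"])  # root hash that verity-setup.service checks against
```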
Jul 2 08:07:05.367568 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:07:05.494976 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:07:05.506560 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 08:07:05.571376 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 08:07:05.579572 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:07:05.583330 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:07:05.585920 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:07:05.597747 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 08:07:05.654865 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:07:05.736077 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 2 08:07:05.736153 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 2 08:07:05.767541 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 08:07:05.768060 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 08:07:05.768508 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:df:5d:a7:84:2f Jul 2 08:07:05.768966 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:07:05.769360 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:07:05.772499 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:07:05.775214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:07:05.775543 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:07:05.779852 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:07:05.782734 (udev-worker)[512]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:07:05.810379 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 2 08:07:05.810475 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 08:07:05.808742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:07:05.823095 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 08:07:05.828275 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 08:07:05.828363 kernel: GPT:9289727 != 16777215 Jul 2 08:07:05.828391 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 08:07:05.830922 kernel: GPT:9289727 != 16777215 Jul 2 08:07:05.830994 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:07:05.831020 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:07:05.855385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:07:05.868639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:07:05.917712 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:07:05.951278 kernel: BTRFS: device fsid 9b0eb482-485a-4aff-8de4-e09ff146eadf devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (512) Jul 2 08:07:05.977299 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (539) Jul 2 08:07:06.007811 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Jul 2 08:07:06.073085 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 2 08:07:06.112941 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 2 08:07:06.117999 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 2 08:07:06.134701 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 08:07:06.146685 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 08:07:06.163106 disk-uuid[658]: Primary Header is updated. Jul 2 08:07:06.163106 disk-uuid[658]: Secondary Entries is updated. Jul 2 08:07:06.163106 disk-uuid[658]: Secondary Header is updated. Jul 2 08:07:06.173299 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:07:06.184298 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:07:07.186403 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:07:07.191268 disk-uuid[659]: The operation has completed successfully. Jul 2 08:07:07.197296 kernel: block device autoloading is deprecated and will be removed. Jul 2 08:07:07.411983 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:07:07.412292 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 08:07:07.465612 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 08:07:07.489479 sh[920]: Success Jul 2 08:07:07.525327 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 08:07:07.674891 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 08:07:07.691539 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 08:07:07.704344 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 08:07:07.740286 kernel: BTRFS info (device dm-0): first mount of filesystem 9b0eb482-485a-4aff-8de4-e09ff146eadf Jul 2 08:07:07.740373 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:07:07.740401 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 08:07:07.742448 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 08:07:07.742528 kernel: BTRFS info (device dm-0): using free space tree Jul 2 08:07:07.863287 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 2 08:07:07.931282 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 08:07:07.935358 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 08:07:07.947569 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 08:07:07.954574 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 08:07:07.976259 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:07:07.976343 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:07:07.979052 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:07:07.986363 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:07:08.009337 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:07:08.010054 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 2 08:07:08.033203 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 08:07:08.046730 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 08:07:08.167344 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:07:08.179563 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:07:08.241681 systemd-networkd[1112]: lo: Link UP Jul 2 08:07:08.241710 systemd-networkd[1112]: lo: Gained carrier Jul 2 08:07:08.245604 systemd-networkd[1112]: Enumeration completed Jul 2 08:07:08.245821 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:07:08.246507 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:07:08.246514 systemd-networkd[1112]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:07:08.251590 systemd[1]: Reached target network.target - Network. Jul 2 08:07:08.252924 systemd-networkd[1112]: eth0: Link UP Jul 2 08:07:08.252934 systemd-networkd[1112]: eth0: Gained carrier Jul 2 08:07:08.252952 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:07:08.298387 systemd-networkd[1112]: eth0: DHCPv4 address 172.31.16.163/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:07:08.605750 ignition[1027]: Ignition 2.18.0 Jul 2 08:07:08.605791 ignition[1027]: Stage: fetch-offline Jul 2 08:07:08.607541 ignition[1027]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:08.607576 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:08.609128 ignition[1027]: Ignition finished successfully Jul 2 08:07:08.615818 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:07:08.631642 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
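The DHCPv4 lease logged by systemd-networkd above (172.31.16.163/20, gateway 172.31.16.1) pins down the VPC subnet the instance landed in. A small sketch using the standard library to derive the containing network from that lease:

```python
# Derive the subnet containing the DHCPv4 lease logged by systemd-networkd.
import ipaddress

iface = ipaddress.ip_interface("172.31.16.163/20")
print(iface.network)                # 172.31.16.0/20
print(iface.network.num_addresses)  # 4096 addresses in the /20
print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True (the gateway)
```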
Jul 2 08:07:08.660036 ignition[1124]: Ignition 2.18.0 Jul 2 08:07:08.660064 ignition[1124]: Stage: fetch Jul 2 08:07:08.661855 ignition[1124]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:08.661888 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:08.662054 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:08.691441 ignition[1124]: PUT result: OK Jul 2 08:07:08.698854 ignition[1124]: parsed url from cmdline: "" Jul 2 08:07:08.698887 ignition[1124]: no config URL provided Jul 2 08:07:08.698905 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:07:08.698935 ignition[1124]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:07:08.698979 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:08.705018 ignition[1124]: PUT result: OK Jul 2 08:07:08.705202 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 2 08:07:08.712210 ignition[1124]: GET result: OK Jul 2 08:07:08.712819 ignition[1124]: parsing config with SHA512: a17b04b3f85e3f68dffbcdc0474f7b6fa1bcfc87191ae8a03eda253ea169e2d02f67c72aba571875c238834972a065b47435bdd317cb2083a58c8334682ebe51 Jul 2 08:07:08.722885 unknown[1124]: fetched base config from "system" Jul 2 08:07:08.723160 unknown[1124]: fetched base config from "system" Jul 2 08:07:08.723853 ignition[1124]: fetch: fetch complete Jul 2 08:07:08.723176 unknown[1124]: fetched user config from "aws" Jul 2 08:07:08.723868 ignition[1124]: fetch: fetch passed Jul 2 08:07:08.724026 ignition[1124]: Ignition finished successfully Jul 2 08:07:08.737324 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 08:07:08.747521 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 08:07:08.787617 ignition[1131]: Ignition 2.18.0 Jul 2 08:07:08.787650 ignition[1131]: Stage: kargs Jul 2 08:07:08.789339 ignition[1131]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:08.789371 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:08.790475 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:08.796940 ignition[1131]: PUT result: OK Jul 2 08:07:08.801982 ignition[1131]: kargs: kargs passed Jul 2 08:07:08.803048 ignition[1131]: Ignition finished successfully Jul 2 08:07:08.807395 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 08:07:08.817610 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 08:07:08.855047 ignition[1139]: Ignition 2.18.0 Jul 2 08:07:08.855655 ignition[1139]: Stage: disks Jul 2 08:07:08.856416 ignition[1139]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:08.856450 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:08.856610 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:08.859247 ignition[1139]: PUT result: OK Jul 2 08:07:08.869517 ignition[1139]: disks: disks passed Jul 2 08:07:08.869658 ignition[1139]: Ignition finished successfully Jul 2 08:07:08.876332 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 08:07:08.879207 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 08:07:08.882722 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 08:07:08.887291 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:07:08.891245 systemd[1]: Reached target sysinit.target - System Initialization. 
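The Ignition fetch stage above follows the IMDSv2 pattern: a PUT to http://169.254.169.254/latest/api/token for a session token, then a GET of the user data with that token, after which the config's SHA512 is logged. A minimal sketch of the same round trip using only the standard library; the header names are the standard IMDSv2 ones, the endpoints are the ones in the log, and this is an illustration rather than Ignition's own client:

```python
# Illustrative IMDSv2 round trip mirroring what the Ignition fetch stage logs:
# PUT for a session token, GET the user data with it, then hash the result.
import hashlib
import urllib.request

IMDS = "http://169.254.169.254"

# 1. PUT .../latest/api/token (the "PUT result: OK" lines above)
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(token_req, timeout=5).read().decode()

# 2. GET .../2019-10-01/user-data with the token (the "GET result: OK" line)
data_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=5).read()

# 3. Ignition then logs "parsing config with SHA512: ..."; hashing the fetched
#    user data gives a digest comparable to that line.
print(hashlib.sha512(user_data).hexdigest())
```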
Jul 2 08:07:08.897128 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:07:08.909642 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 08:07:08.993169 systemd-fsck[1148]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 08:07:09.003582 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 08:07:09.015502 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 08:07:09.110259 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9aacfbff-cef8-4758-afb5-6310e7c6c5e6 r/w with ordered data mode. Quota mode: none. Jul 2 08:07:09.112689 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 08:07:09.116321 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 08:07:09.195528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:07:09.210312 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 08:07:09.214893 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 08:07:09.230906 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:07:09.239590 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1167) Jul 2 08:07:09.230974 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:07:09.241149 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 08:07:09.250013 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:07:09.250112 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:07:09.251285 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:07:09.253807 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 08:07:09.262292 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:07:09.269958 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:07:09.310379 systemd-networkd[1112]: eth0: Gained IPv6LL Jul 2 08:07:09.857902 initrd-setup-root[1191]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:07:09.866543 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:07:09.875019 initrd-setup-root[1205]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:07:09.883919 initrd-setup-root[1212]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:07:10.235291 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 08:07:10.247471 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 08:07:10.259572 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 08:07:10.278508 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 08:07:10.280492 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:07:10.307175 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 2 08:07:10.325658 ignition[1281]: INFO : Ignition 2.18.0 Jul 2 08:07:10.325658 ignition[1281]: INFO : Stage: mount Jul 2 08:07:10.328709 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:10.328709 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:10.332614 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:10.335629 ignition[1281]: INFO : PUT result: OK Jul 2 08:07:10.340474 ignition[1281]: INFO : mount: mount passed Jul 2 08:07:10.342326 ignition[1281]: INFO : Ignition finished successfully Jul 2 08:07:10.345969 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 08:07:10.357513 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 08:07:10.384471 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:07:10.403257 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1292) Jul 2 08:07:10.407384 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:07:10.407428 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:07:10.407455 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:07:10.412259 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:07:10.415983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:07:10.458184 ignition[1309]: INFO : Ignition 2.18.0 Jul 2 08:07:10.458184 ignition[1309]: INFO : Stage: files Jul 2 08:07:10.461322 ignition[1309]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:10.461322 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:10.461322 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:10.468453 ignition[1309]: INFO : PUT result: OK Jul 2 08:07:10.471926 ignition[1309]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:07:10.474193 ignition[1309]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:07:10.474193 ignition[1309]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:07:10.508635 ignition[1309]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:07:10.511174 ignition[1309]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:07:10.511174 ignition[1309]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:07:10.511064 unknown[1309]: wrote ssh authorized keys file for user: core Jul 2 08:07:10.519388 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:07:10.519388 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 08:07:10.581117 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 08:07:10.706044 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:07:10.709473 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 08:07:10.709473 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] 
writing file "/sysroot/home/core/install.sh" Jul 2 08:07:10.709473 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:07:10.709473 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:07:10.709473 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:07:10.709473 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:07:10.709473 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:07:10.709473 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:07:10.709473 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:07:10.738994 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:07:10.738994 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:07:10.738994 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:07:10.738994 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:07:10.738994 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jul 2 08:07:11.192295 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 08:07:11.557612 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:07:11.557612 ignition[1309]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 08:07:11.566467 ignition[1309]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:07:11.566467 ignition[1309]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:07:11.566467 ignition[1309]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 08:07:11.566467 ignition[1309]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 08:07:11.566467 ignition[1309]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 08:07:11.566467 ignition[1309]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:07:11.566467 ignition[1309]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:07:11.566467 ignition[1309]: INFO : files: 
files passed Jul 2 08:07:11.566467 ignition[1309]: INFO : Ignition finished successfully Jul 2 08:07:11.577762 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 08:07:11.600699 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 08:07:11.617692 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 08:07:11.632969 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 08:07:11.635456 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 08:07:11.649740 initrd-setup-root-after-ignition[1338]: grep: Jul 2 08:07:11.649740 initrd-setup-root-after-ignition[1342]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:07:11.654153 initrd-setup-root-after-ignition[1338]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:07:11.654153 initrd-setup-root-after-ignition[1338]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:07:11.659313 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:07:11.663609 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 08:07:11.687431 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 08:07:11.748514 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 08:07:11.750354 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 08:07:11.756717 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 08:07:11.758819 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 08:07:11.764437 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 08:07:11.778623 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 08:07:11.805837 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:07:11.819534 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 08:07:11.843044 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:07:11.847199 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:07:11.849539 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 08:07:11.851325 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 08:07:11.851562 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:07:11.854149 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 08:07:11.856458 systemd[1]: Stopped target basic.target - Basic System. Jul 2 08:07:11.867910 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 08:07:11.870086 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:07:11.874298 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 08:07:11.876648 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 08:07:11.884345 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:07:11.884704 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 08:07:11.892821 systemd[1]: Stopped target local-fs.target - Local File Systems. 
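The files-stage run that just finished (ops 1 through e above) is driven entirely by the instance's Ignition config. As a rough illustration only, assuming a v3-series Ignition spec layout (the exact version accepted depends on the Ignition build), a config producing this kind of run could be assembled as in the Python sketch below; the SSH key, the inline script, and the unit contents are placeholders, not values recovered from this host.

    import json

    # Hypothetical sketch of an Ignition-style config matching the logged ops.
    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [
                # op(1)/op(2): create or modify user "core" and install its SSH keys
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"]}
            ]
        },
        "storage": {
            "files": [
                # op(3): fetch the helm tarball over HTTPS into /opt
                {
                    "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                    "mode": 420,
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
                },
                # op(4): write a small local file inline via a data: URL (placeholder script)
                {
                    "path": "/home/core/install.sh",
                    "mode": 493,
                    "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0Aecho%20hello%0A"},
                },
            ]
        },
        "systemd": {
            "units": [
                # op(b)/op(d): install prepare-helm.service and preset it to enabled
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))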
Jul 2 08:07:11.895177 systemd[1]: Stopped target swap.target - Swaps. Jul 2 08:07:11.896884 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 08:07:11.897168 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:07:11.905721 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:07:11.908456 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:07:11.910759 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 08:07:11.916214 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:07:11.922795 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 08:07:11.923036 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 08:07:11.925832 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 08:07:11.926725 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:07:11.934264 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 08:07:11.934532 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 08:07:11.952107 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 08:07:11.954933 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 08:07:11.955293 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:07:11.967666 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 08:07:11.971110 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 08:07:11.971502 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:07:11.974566 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 08:07:11.975690 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 08:07:11.995113 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 08:07:11.997511 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 08:07:12.017455 ignition[1363]: INFO : Ignition 2.18.0 Jul 2 08:07:12.017455 ignition[1363]: INFO : Stage: umount Jul 2 08:07:12.020717 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:07:12.022539 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:07:12.022539 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:07:12.027257 ignition[1363]: INFO : PUT result: OK Jul 2 08:07:12.032206 ignition[1363]: INFO : umount: umount passed Jul 2 08:07:12.034016 ignition[1363]: INFO : Ignition finished successfully Jul 2 08:07:12.038147 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 08:07:12.040122 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 08:07:12.046167 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 08:07:12.048886 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 08:07:12.049014 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 08:07:12.052548 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:07:12.052661 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 08:07:12.058881 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 08:07:12.058999 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jul 2 08:07:12.062122 systemd[1]: Stopped target network.target - Network. Jul 2 08:07:12.067245 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 08:07:12.067355 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:07:12.069993 systemd[1]: Stopped target paths.target - Path Units. Jul 2 08:07:12.078031 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 08:07:12.079047 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:07:12.082798 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 08:07:12.086649 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 08:07:12.089092 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 08:07:12.089187 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:07:12.091793 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 08:07:12.091871 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:07:12.093765 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:07:12.093874 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 08:07:12.096569 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 08:07:12.096655 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 08:07:12.103190 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 08:07:12.108327 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 08:07:12.111582 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:07:12.111820 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 08:07:12.118344 systemd-networkd[1112]: eth0: DHCPv6 lease lost Jul 2 08:07:12.120683 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 08:07:12.120877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 08:07:12.130787 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 08:07:12.131415 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 08:07:12.137719 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 08:07:12.138372 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 08:07:12.150796 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 08:07:12.150919 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:07:12.172511 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 08:07:12.176529 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 08:07:12.176645 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:07:12.180531 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:07:12.180633 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:07:12.190662 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 08:07:12.190773 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 08:07:12.192799 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 08:07:12.192881 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:07:12.195197 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 2 08:07:12.220053 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 08:07:12.221677 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 08:07:12.233637 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 08:07:12.234133 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:07:12.241112 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 08:07:12.241247 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 08:07:12.246756 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:07:12.246831 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:07:12.248731 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 08:07:12.248815 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:07:12.250815 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:07:12.250910 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 08:07:12.253122 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:07:12.253251 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:07:12.271708 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 08:07:12.281598 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 08:07:12.281724 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:07:12.284124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:07:12.284275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:07:12.316161 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:07:12.318338 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 08:07:12.322723 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 08:07:12.339627 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 08:07:12.357711 systemd[1]: Switching root. Jul 2 08:07:12.412636 systemd-journald[250]: Journal stopped Jul 2 08:07:15.164993 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Jul 2 08:07:15.165118 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 08:07:15.165165 kernel: SELinux: policy capability open_perms=1 Jul 2 08:07:15.165198 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 08:07:15.169439 kernel: SELinux: policy capability always_check_network=0 Jul 2 08:07:15.169492 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 08:07:15.169540 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 08:07:15.169609 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 08:07:15.169645 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 08:07:15.169676 kernel: audit: type=1403 audit(1719907633.580:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:07:15.169718 systemd[1]: Successfully loaded SELinux policy in 55.854ms. Jul 2 08:07:15.169765 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.103ms. 
Jul 2 08:07:15.169801 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:07:15.169834 systemd[1]: Detected virtualization amazon. Jul 2 08:07:15.169866 systemd[1]: Detected architecture arm64. Jul 2 08:07:15.169909 systemd[1]: Detected first boot. Jul 2 08:07:15.169943 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:07:15.169976 zram_generator::config[1405]: No configuration found. Jul 2 08:07:15.170010 systemd[1]: Populated /etc with preset unit settings. Jul 2 08:07:15.170042 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 08:07:15.170074 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 08:07:15.170107 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 08:07:15.170138 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 08:07:15.170174 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 08:07:15.170205 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 08:07:15.170262 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 08:07:15.170297 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 08:07:15.170328 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 08:07:15.170360 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 08:07:15.170392 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 08:07:15.170421 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:07:15.170454 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:07:15.170491 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 08:07:15.170523 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 08:07:15.170555 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 08:07:15.170586 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:07:15.170644 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 08:07:15.170678 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:07:15.170711 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 08:07:15.170744 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 08:07:15.170776 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 08:07:15.170813 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 08:07:15.170848 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:07:15.170879 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:07:15.170912 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:07:15.170943 systemd[1]: Reached target swap.target - Swaps. 
Jul 2 08:07:15.170973 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 08:07:15.171003 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 08:07:15.171034 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:07:15.171068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:07:15.171098 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:07:15.171130 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 08:07:15.171159 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 08:07:15.171189 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 08:07:15.171218 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 08:07:15.194275 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 08:07:15.194313 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 08:07:15.194366 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 08:07:15.194416 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:07:15.194450 systemd[1]: Reached target machines.target - Containers. Jul 2 08:07:15.194483 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 08:07:15.194516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:07:15.194546 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:07:15.194578 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 08:07:15.194632 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:07:15.194667 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:07:15.194705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:07:15.194735 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 08:07:15.194765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:07:15.194795 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 08:07:15.194825 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 08:07:15.194856 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 08:07:15.194890 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 08:07:15.194921 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 08:07:15.194952 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:07:15.194986 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:07:15.195016 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 08:07:15.195046 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 08:07:15.195079 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 2 08:07:15.195109 kernel: fuse: init (API version 7.39) Jul 2 08:07:15.195139 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 08:07:15.195168 systemd[1]: Stopped verity-setup.service. Jul 2 08:07:15.195197 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 08:07:15.195252 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 08:07:15.195291 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 08:07:15.195365 systemd-journald[1482]: Collecting audit messages is disabled. Jul 2 08:07:15.195416 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 08:07:15.195448 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 08:07:15.195485 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 08:07:15.195516 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:07:15.195545 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 08:07:15.195574 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 08:07:15.195606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:07:15.195639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:07:15.195668 systemd-journald[1482]: Journal started Jul 2 08:07:15.195721 systemd-journald[1482]: Runtime Journal (/run/log/journal/ec2cb4017a358d81a33f35dc938518fe) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:07:14.672880 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:07:15.217989 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:07:14.720946 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 08:07:14.721896 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 08:07:15.203962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:07:15.205433 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:07:15.209018 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:07:15.209358 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 08:07:15.211921 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 08:07:15.231742 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 08:07:15.242381 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 08:07:15.245402 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:07:15.245460 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:07:15.255629 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 08:07:15.277512 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 08:07:15.286563 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 08:07:15.288656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:07:15.296769 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 08:07:15.317969 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
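A quick consistency check on the Runtime Journal line above (sizes in MiB): the reported free space is just the configured cap minus the current journal size.

    # Runtime Journal (/run/log/journal/...): 8.0M used, 75.3M max
    used, cap = 8.0, 75.3
    print(f"{cap - used:.1f}M free")   # 67.3M, matching the log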
Jul 2 08:07:15.320118 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:07:15.325577 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 08:07:15.344140 kernel: loop: module loaded Jul 2 08:07:15.332526 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 08:07:15.342807 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:07:15.343143 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:07:15.347456 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:07:15.350662 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 08:07:15.353489 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 08:07:15.356440 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 08:07:15.360480 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 08:07:15.408322 systemd-journald[1482]: Time spent on flushing to /var/log/journal/ec2cb4017a358d81a33f35dc938518fe is 118.275ms for 899 entries. Jul 2 08:07:15.408322 systemd-journald[1482]: System Journal (/var/log/journal/ec2cb4017a358d81a33f35dc938518fe) is 8.0M, max 195.6M, 187.6M free. Jul 2 08:07:15.556322 systemd-journald[1482]: Received client request to flush runtime journal. Jul 2 08:07:15.556388 kernel: ACPI: bus type drm_connector registered Jul 2 08:07:15.556422 kernel: loop0: detected capacity change from 0 to 194096 Jul 2 08:07:15.556455 kernel: block loop0: the capability attribute has been deprecated. Jul 2 08:07:15.556755 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:07:15.411074 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 08:07:15.416461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:07:15.438506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:07:15.467551 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:07:15.467899 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:07:15.513017 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 08:07:15.515520 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 08:07:15.526514 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 08:07:15.542169 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 08:07:15.558736 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 08:07:15.571159 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 08:07:15.598369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:07:15.609900 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 08:07:15.613657 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:07:15.625272 kernel: loop1: detected capacity change from 0 to 59672 Jul 2 08:07:15.661804 udevadm[1548]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 08:07:15.684105 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:07:15.691312 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 08:07:15.694126 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 08:07:15.704628 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:07:15.762367 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. Jul 2 08:07:15.762407 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. Jul 2 08:07:15.778827 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:07:15.784319 kernel: loop2: detected capacity change from 0 to 51896 Jul 2 08:07:15.888279 kernel: loop3: detected capacity change from 0 to 113672 Jul 2 08:07:16.021261 kernel: loop4: detected capacity change from 0 to 194096 Jul 2 08:07:16.046275 kernel: loop5: detected capacity change from 0 to 59672 Jul 2 08:07:16.061272 kernel: loop6: detected capacity change from 0 to 51896 Jul 2 08:07:16.071705 kernel: loop7: detected capacity change from 0 to 113672 Jul 2 08:07:16.084994 (sd-merge)[1559]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 08:07:16.087018 (sd-merge)[1559]: Merged extensions into '/usr'. Jul 2 08:07:16.097130 systemd[1]: Reloading requested from client PID 1508 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 08:07:16.097377 systemd[1]: Reloading... Jul 2 08:07:16.227286 zram_generator::config[1581]: No configuration found. Jul 2 08:07:16.636526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:07:16.756434 systemd[1]: Reloading finished in 658 ms. Jul 2 08:07:16.802292 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 08:07:16.805425 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 08:07:16.820563 systemd[1]: Starting ensure-sysext.service... Jul 2 08:07:16.837679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:07:16.844654 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:07:16.875548 systemd[1]: Reloading requested from client PID 1635 ('systemctl') (unit ensure-sysext.service)... Jul 2 08:07:16.875577 systemd[1]: Reloading... Jul 2 08:07:16.893480 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:07:16.894111 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 08:07:16.896642 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:07:16.897434 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Jul 2 08:07:16.897574 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Jul 2 08:07:16.913010 systemd-tmpfiles[1636]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:07:16.913200 systemd-tmpfiles[1636]: Skipping /boot Jul 2 08:07:16.945200 systemd-tmpfiles[1636]: Detected autofs mount point /boot during canonicalization of boot. 
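sd-merge reports four extensions ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') being overlaid onto /usr; the kubernetes one is the image Ignition linked earlier (/etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw, op(9)). A small sketch of listing what would be picked up, assuming the usual systemd-sysext search directories; this is not the sd-merge implementation itself.

    import os

    # Assumed sysext search paths; *.raw entries here may be symlinks into /opt/extensions.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            if not name.endswith(".raw"):
                continue
            path = os.path.join(d, name)
            target = os.path.realpath(path)   # resolve the symlink Ignition wrote
            print(f"{name[:-4]:20} {path} -> {target}")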
Jul 2 08:07:16.945410 systemd-tmpfiles[1636]: Skipping /boot Jul 2 08:07:16.971112 systemd-udevd[1637]: Using default interface naming scheme 'v255'. Jul 2 08:07:17.105425 zram_generator::config[1678]: No configuration found. Jul 2 08:07:17.172343 ldconfig[1501]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:07:17.199488 (udev-worker)[1674]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:07:17.227656 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1664) Jul 2 08:07:17.487425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:07:17.515317 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1685) Jul 2 08:07:17.648146 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 08:07:17.649531 systemd[1]: Reloading finished in 773 ms. Jul 2 08:07:17.679325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:07:17.682502 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 08:07:17.686286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:07:17.782276 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 08:07:17.808746 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 08:07:17.819614 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:07:17.824620 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 08:07:17.826999 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:07:17.834742 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 08:07:17.842604 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:07:17.848114 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:07:17.853050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:07:17.859562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:07:17.861666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:07:17.866693 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 08:07:17.874588 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 08:07:17.885965 lvm[1834]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:07:17.884613 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:07:17.900520 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:07:17.903443 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 08:07:17.923842 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 08:07:17.932422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 2 08:07:17.937360 systemd[1]: Finished ensure-sysext.service. Jul 2 08:07:17.974945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:07:17.975532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:07:17.978519 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:07:17.994365 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 08:07:18.021307 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 08:07:18.025159 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:07:18.025574 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:07:18.031056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:07:18.032635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:07:18.036458 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:07:18.037436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:07:18.041109 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 08:07:18.065439 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:07:18.077679 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 08:07:18.080432 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:07:18.089075 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 08:07:18.127707 lvm[1865]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:07:18.136376 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 08:07:18.148627 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 08:07:18.151826 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 08:07:18.156741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:07:18.171264 augenrules[1876]: No rules Jul 2 08:07:18.175979 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:07:18.209268 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 08:07:18.216428 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 08:07:18.222957 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:07:18.232734 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 08:07:18.331722 systemd-networkd[1842]: lo: Link UP Jul 2 08:07:18.331741 systemd-networkd[1842]: lo: Gained carrier Jul 2 08:07:18.334536 systemd-networkd[1842]: Enumeration completed Jul 2 08:07:18.334773 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:07:18.336830 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:07:18.336839 systemd-networkd[1842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 08:07:18.339287 systemd-networkd[1842]: eth0: Link UP Jul 2 08:07:18.339652 systemd-networkd[1842]: eth0: Gained carrier Jul 2 08:07:18.339688 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:07:18.344588 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 08:07:18.352723 systemd-resolved[1843]: Positive Trust Anchors: Jul 2 08:07:18.352761 systemd-resolved[1843]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:07:18.352824 systemd-resolved[1843]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:07:18.356386 systemd-networkd[1842]: eth0: DHCPv4 address 172.31.16.163/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:07:18.363695 systemd-resolved[1843]: Defaulting to hostname 'linux'. Jul 2 08:07:18.368998 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:07:18.371349 systemd[1]: Reached target network.target - Network. Jul 2 08:07:18.373049 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:07:18.375400 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 08:07:18.377445 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 08:07:18.379656 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 08:07:18.382204 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 08:07:18.384414 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 08:07:18.386618 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 08:07:18.388775 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:07:18.388826 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:07:18.390442 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:07:18.393299 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 08:07:18.398298 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 08:07:18.407702 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 08:07:18.410806 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 08:07:18.412913 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:07:18.414665 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:07:18.416501 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:07:18.416567 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:07:18.424504 systemd[1]: Starting containerd.service - containerd container runtime... 
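The DHCPv4 lease logged for eth0 above (172.31.16.163/20, gateway 172.31.16.1) can be unpacked with the standard library to see what the /20 prefix implies:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.16.163/20")
    net = iface.network
    print(net)                 # 172.31.16.0/20
    print(net.netmask)         # 255.255.240.0
    print(net.num_addresses)   # 4096
    print(ipaddress.ip_address("172.31.16.1") in net)  # True: gateway/DHCP server is on-subnet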
Jul 2 08:07:18.429694 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 08:07:18.437640 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 08:07:18.449195 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 08:07:18.456306 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 08:07:18.458478 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 08:07:18.474765 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 08:07:18.477631 jq[1899]: false Jul 2 08:07:18.482496 systemd[1]: Started ntpd.service - Network Time Service. Jul 2 08:07:18.495644 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 08:07:18.506666 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 08:07:18.520674 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 08:07:18.537766 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 08:07:18.548598 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 08:07:18.553402 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:07:18.554412 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:07:18.564136 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 08:07:18.568959 dbus-daemon[1898]: [system] SELinux support is enabled Jul 2 08:07:18.575405 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 08:07:18.583010 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 08:07:18.594704 dbus-daemon[1898]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1842 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 08:07:18.608077 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:07:18.608459 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 08:07:18.640717 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:07:18.642916 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 08:07:18.650373 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 08:07:18.650542 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 08:07:18.655543 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:07:18.655585 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
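prepare-helm.service is described as "Unpack helm to /opt/bin", and the tar output a little further down shows it extracting linux-arm64/helm from the tarball Ignition fetched to /opt. The unit itself almost certainly just shells out to tar; the equivalent operation, sketched in Python under that assumption:

    import os
    import shutil
    import tarfile

    TARBALL = "/opt/helm-v3.13.2-linux-arm64.tar.gz"   # path written by Ignition op(3)
    DEST = "/opt/bin/helm"

    os.makedirs(os.path.dirname(DEST), exist_ok=True)
    with tarfile.open(TARBALL, "r:gz") as tar:
        member = tar.getmember("linux-arm64/helm")     # the entry tar prints in the log
        with tar.extractfile(member) as src, open(DEST, "wb") as dst:
            shutil.copyfileobj(src, dst)
    os.chmod(DEST, 0o755)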
Jul 2 08:07:18.664711 (ntainerd)[1920]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 08:07:18.672307 dbus-daemon[1898]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 08:07:18.693439 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 2 08:07:18.695931 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:07:18.697368 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 08:07:18.709733 update_engine[1912]: I0702 08:07:18.709583 1912 main.cc:92] Flatcar Update Engine starting Jul 2 08:07:18.712667 systemd[1]: Started update-engine.service - Update Engine. Jul 2 08:07:18.713741 update_engine[1912]: I0702 08:07:18.713475 1912 update_check_scheduler.cc:74] Next update check in 10m9s Jul 2 08:07:18.724312 extend-filesystems[1900]: Found loop4 Jul 2 08:07:18.724312 extend-filesystems[1900]: Found loop5 Jul 2 08:07:18.724312 extend-filesystems[1900]: Found loop6 Jul 2 08:07:18.724312 extend-filesystems[1900]: Found loop7 Jul 2 08:07:18.724312 extend-filesystems[1900]: Found nvme0n1 Jul 2 08:07:18.724312 extend-filesystems[1900]: Found nvme0n1p1 Jul 2 08:07:18.724312 extend-filesystems[1900]: Found nvme0n1p2 Jul 2 08:07:18.724312 extend-filesystems[1900]: Found nvme0n1p3 Jul 2 08:07:18.724312 extend-filesystems[1900]: Found usr Jul 2 08:07:18.724312 extend-filesystems[1900]: Found nvme0n1p4 Jul 2 08:07:18.724312 extend-filesystems[1900]: Found nvme0n1p6 Jul 2 08:07:18.724312 extend-filesystems[1900]: Found nvme0n1p7 Jul 2 08:07:18.762799 extend-filesystems[1900]: Found nvme0n1p9 Jul 2 08:07:18.762799 extend-filesystems[1900]: Checking size of /dev/nvme0n1p9 Jul 2 08:07:18.773451 jq[1913]: true Jul 2 08:07:18.728613 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 08:07:18.773773 tar[1917]: linux-arm64/helm Jul 2 08:07:18.811915 ntpd[1902]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:07:18.821559 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:07:18.821559 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:07:18.821559 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: ---------------------------------------------------- Jul 2 08:07:18.821559 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:07:18.821559 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:07:18.821559 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: corporation. Support and training for ntp-4 are Jul 2 08:07:18.821559 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: available at https://www.nwtime.org/support Jul 2 08:07:18.821559 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: ---------------------------------------------------- Jul 2 08:07:18.821559 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: proto: precision = 0.108 usec (-23) Jul 2 08:07:18.811980 ntpd[1902]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:07:18.812000 ntpd[1902]: ---------------------------------------------------- Jul 2 08:07:18.822924 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: basedate set to 2024-06-19 Jul 2 08:07:18.822924 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: gps base set to 2024-06-23 (week 2320) Jul 2 08:07:18.812020 ntpd[1902]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:07:18.812039 ntpd[1902]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:07:18.836481 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:07:18.836481 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:07:18.812057 ntpd[1902]: corporation. Support and training for ntp-4 are Jul 2 08:07:18.812075 ntpd[1902]: available at https://www.nwtime.org/support Jul 2 08:07:18.812094 ntpd[1902]: ---------------------------------------------------- Jul 2 08:07:18.815769 ntpd[1902]: proto: precision = 0.108 usec (-23) Jul 2 08:07:18.822727 ntpd[1902]: basedate set to 2024-06-19 Jul 2 08:07:18.822758 ntpd[1902]: gps base set to 2024-06-23 (week 2320) Jul 2 08:07:18.836120 ntpd[1902]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:07:18.836198 ntpd[1902]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:07:18.845633 ntpd[1902]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:07:18.848980 jq[1938]: true Jul 2 08:07:18.856871 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:07:18.856871 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: Listen normally on 3 eth0 172.31.16.163:123 Jul 2 08:07:18.856871 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: Listen normally on 4 lo [::1]:123 Jul 2 08:07:18.856871 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: bind(21) AF_INET6 fe80::4df:5dff:fea7:842f%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:07:18.856871 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: unable to create socket on eth0 (5) for fe80::4df:5dff:fea7:842f%2#123 Jul 2 08:07:18.856871 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: failed to init interface for address fe80::4df:5dff:fea7:842f%2 Jul 2 08:07:18.856871 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: Listening on routing socket on fd #21 for interface updates Jul 2 08:07:18.845723 ntpd[1902]: Listen normally on 3 eth0 172.31.16.163:123 Jul 2 08:07:18.845801 ntpd[1902]: Listen normally on 4 lo [::1]:123 Jul 2 08:07:18.845877 ntpd[1902]: bind(21) AF_INET6 fe80::4df:5dff:fea7:842f%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:07:18.845914 ntpd[1902]: unable to create socket on eth0 (5) for fe80::4df:5dff:fea7:842f%2#123 Jul 2 08:07:18.845945 ntpd[1902]: failed to init interface for address fe80::4df:5dff:fea7:842f%2 Jul 2 08:07:18.845997 ntpd[1902]: Listening on routing socket on fd #21 for interface updates Jul 2 08:07:18.871286 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:07:18.871286 ntpd[1902]: 2 Jul 08:07:18 ntpd[1902]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:07:18.870855 ntpd[1902]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:07:18.870907 ntpd[1902]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:07:18.882774 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 2 08:07:18.892590 extend-filesystems[1900]: Resized partition /dev/nvme0n1p9 Jul 2 08:07:18.903336 extend-filesystems[1952]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 08:07:18.913252 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 08:07:18.971255 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 08:07:19.011890 systemd-logind[1911]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 08:07:19.011938 systemd-logind[1911]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 08:07:19.013738 systemd-logind[1911]: New seat seat0. 
Jul 2 08:07:19.019710 extend-filesystems[1952]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 08:07:19.019710 extend-filesystems[1952]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 08:07:19.019710 extend-filesystems[1952]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 08:07:19.015206 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 08:07:19.026134 extend-filesystems[1900]: Resized filesystem in /dev/nvme0n1p9 Jul 2 08:07:19.025004 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:07:19.025364 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 08:07:19.060237 coreos-metadata[1897]: Jul 02 08:07:19.059 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:07:19.063358 coreos-metadata[1897]: Jul 02 08:07:19.060 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 08:07:19.065351 coreos-metadata[1897]: Jul 02 08:07:19.065 INFO Fetch successful Jul 2 08:07:19.065351 coreos-metadata[1897]: Jul 02 08:07:19.065 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 08:07:19.073519 coreos-metadata[1897]: Jul 02 08:07:19.071 INFO Fetch successful Jul 2 08:07:19.073519 coreos-metadata[1897]: Jul 02 08:07:19.071 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 08:07:19.073519 coreos-metadata[1897]: Jul 02 08:07:19.073 INFO Fetch successful Jul 2 08:07:19.073519 coreos-metadata[1897]: Jul 02 08:07:19.073 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 08:07:19.074460 coreos-metadata[1897]: Jul 02 08:07:19.074 INFO Fetch successful Jul 2 08:07:19.074460 coreos-metadata[1897]: Jul 02 08:07:19.074 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 08:07:19.075522 coreos-metadata[1897]: Jul 02 08:07:19.075 INFO Fetch failed with 404: resource not found Jul 2 08:07:19.075522 coreos-metadata[1897]: Jul 02 08:07:19.075 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 08:07:19.087300 coreos-metadata[1897]: Jul 02 08:07:19.083 INFO Fetch successful Jul 2 08:07:19.087300 coreos-metadata[1897]: Jul 02 08:07:19.083 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 08:07:19.084936 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
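For scale, the ext4 block counts reported for /dev/nvme0n1p9 above (4 KiB blocks, per the "(4k) blocks long" message) work out as follows:

    BLOCK = 4096
    old_blocks, new_blocks = 553472, 1489915

    print(f"before: {old_blocks * BLOCK / 2**30:.2f} GiB")   # ~2.11 GiB
    print(f"after:  {new_blocks * BLOCK / 2**30:.2f} GiB")   # ~5.68 GiB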
Jul 2 08:07:19.087567 bash[1976]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:07:19.090833 coreos-metadata[1897]: Jul 02 08:07:19.087 INFO Fetch successful Jul 2 08:07:19.090833 coreos-metadata[1897]: Jul 02 08:07:19.088 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 08:07:19.094468 coreos-metadata[1897]: Jul 02 08:07:19.094 INFO Fetch successful Jul 2 08:07:19.094468 coreos-metadata[1897]: Jul 02 08:07:19.094 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 08:07:19.096556 coreos-metadata[1897]: Jul 02 08:07:19.095 INFO Fetch successful Jul 2 08:07:19.096556 coreos-metadata[1897]: Jul 02 08:07:19.095 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 08:07:19.098499 coreos-metadata[1897]: Jul 02 08:07:19.098 INFO Fetch successful Jul 2 08:07:19.114415 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1702) Jul 2 08:07:19.119151 systemd[1]: Starting sshkeys.service... Jul 2 08:07:19.160635 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 08:07:19.172174 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 08:07:19.184564 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 08:07:19.279969 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 08:07:19.284643 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 08:07:19.375364 dbus-daemon[1898]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 08:07:19.375676 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 08:07:19.387832 dbus-daemon[1898]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1932 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 08:07:19.425562 systemd[1]: Starting polkit.service - Authorization Manager... 
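Both Ignition (the earlier "PUT http://169.254.169.254/latest/api/token" lines) and coreos-metadata (the Putting/Fetching lines above) talk to the EC2 instance metadata service the same way: a PUT to obtain an IMDSv2 session token, then GETs carrying that token. A minimal sketch of that flow; the header names come from AWS's IMDSv2 documentation rather than from this log, and the timeout value is arbitrary.

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: request a session token (TTL in seconds).
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: fetch metadata paths with the token attached.
    def fetch(path: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(req, timeout=2).read().decode()

    print(fetch("/2021-01-03/meta-data/instance-id"))
    print(fetch("/2021-01-03/meta-data/placement/availability-zone"))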
Jul 2 08:07:19.494812 locksmithd[1934]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:07:19.503519 polkitd[2025]: Started polkitd version 121 Jul 2 08:07:19.595857 polkitd[2025]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 08:07:19.595995 polkitd[2025]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 08:07:19.617659 coreos-metadata[1988]: Jul 02 08:07:19.617 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:07:19.619986 containerd[1920]: time="2024-07-02T08:07:19.618596664Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 08:07:19.621140 polkitd[2025]: Finished loading, compiling and executing 2 rules Jul 2 08:07:19.622363 coreos-metadata[1988]: Jul 02 08:07:19.622 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 08:07:19.623218 coreos-metadata[1988]: Jul 02 08:07:19.623 INFO Fetch successful Jul 2 08:07:19.623600 coreos-metadata[1988]: Jul 02 08:07:19.623 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 08:07:19.625178 coreos-metadata[1988]: Jul 02 08:07:19.624 INFO Fetch successful Jul 2 08:07:19.632618 unknown[1988]: wrote ssh authorized keys file for user: core Jul 2 08:07:19.633051 dbus-daemon[1898]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 08:07:19.634723 systemd[1]: Started polkit.service - Authorization Manager. Jul 2 08:07:19.644455 polkitd[2025]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 08:07:19.703352 update-ssh-keys[2085]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:07:19.706634 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 08:07:19.715143 systemd[1]: Finished sshkeys.service. Jul 2 08:07:19.739720 systemd-hostnamed[1932]: Hostname set to (transient) Jul 2 08:07:19.739904 systemd-resolved[1843]: System hostname changed to 'ip-172-31-16-163'. Jul 2 08:07:19.802121 containerd[1920]: time="2024-07-02T08:07:19.801451416Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 08:07:19.802121 containerd[1920]: time="2024-07-02T08:07:19.801523764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:19.812799 ntpd[1902]: bind(24) AF_INET6 fe80::4df:5dff:fea7:842f%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:07:19.812869 ntpd[1902]: unable to create socket on eth0 (6) for fe80::4df:5dff:fea7:842f%2#123 Jul 2 08:07:19.813340 ntpd[1902]: 2 Jul 08:07:19 ntpd[1902]: bind(24) AF_INET6 fe80::4df:5dff:fea7:842f%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:07:19.813340 ntpd[1902]: 2 Jul 08:07:19 ntpd[1902]: unable to create socket on eth0 (6) for fe80::4df:5dff:fea7:842f%2#123 Jul 2 08:07:19.813340 ntpd[1902]: 2 Jul 08:07:19 ntpd[1902]: failed to init interface for address fe80::4df:5dff:fea7:842f%2 Jul 2 08:07:19.812898 ntpd[1902]: failed to init interface for address fe80::4df:5dff:fea7:842f%2 Jul 2 08:07:19.819260 containerd[1920]: time="2024-07-02T08:07:19.817670557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:07:19.819260 containerd[1920]: time="2024-07-02T08:07:19.817736485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:19.819260 containerd[1920]: time="2024-07-02T08:07:19.818117161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:07:19.819260 containerd[1920]: time="2024-07-02T08:07:19.818166421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:07:19.819260 containerd[1920]: time="2024-07-02T08:07:19.818515405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:19.819260 containerd[1920]: time="2024-07-02T08:07:19.818713765Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:07:19.819260 containerd[1920]: time="2024-07-02T08:07:19.818750665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:19.819260 containerd[1920]: time="2024-07-02T08:07:19.818907481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:19.820191 containerd[1920]: time="2024-07-02T08:07:19.820090429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:19.820755 containerd[1920]: time="2024-07-02T08:07:19.820689289Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:07:19.820916 containerd[1920]: time="2024-07-02T08:07:19.820886533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:07:19.821718 containerd[1920]: time="2024-07-02T08:07:19.821666509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:07:19.821889 containerd[1920]: time="2024-07-02T08:07:19.821816809Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 08:07:19.822216 containerd[1920]: time="2024-07-02T08:07:19.822087205Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:07:19.822216 containerd[1920]: time="2024-07-02T08:07:19.822146857Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.872619085Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.872706385Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.872737225Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.872800117Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.872833237Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.872857249Z" level=info msg="NRI interface is disabled by configuration." Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.872885401Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.873123673Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.873158173Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.873188257Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.873261061Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.873303985Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.873342445Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:07:19.874254 containerd[1920]: time="2024-07-02T08:07:19.873372433Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:07:19.874901 containerd[1920]: time="2024-07-02T08:07:19.873401449Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:07:19.874901 containerd[1920]: time="2024-07-02T08:07:19.873437461Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:07:19.874901 containerd[1920]: time="2024-07-02T08:07:19.873470845Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:07:19.874901 containerd[1920]: time="2024-07-02T08:07:19.873499849Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:07:19.874901 containerd[1920]: time="2024-07-02T08:07:19.873526429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:07:19.874901 containerd[1920]: time="2024-07-02T08:07:19.873719737Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:07:19.874901 containerd[1920]: time="2024-07-02T08:07:19.874104229Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 2 08:07:19.874901 containerd[1920]: time="2024-07-02T08:07:19.874148053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.874901 containerd[1920]: time="2024-07-02T08:07:19.874180765Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 08:07:19.875945 containerd[1920]: time="2024-07-02T08:07:19.875624329Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:07:19.875945 containerd[1920]: time="2024-07-02T08:07:19.875802673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.876244 containerd[1920]: time="2024-07-02T08:07:19.875836321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.876244 containerd[1920]: time="2024-07-02T08:07:19.876096037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.876244 containerd[1920]: time="2024-07-02T08:07:19.876156961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.876244 containerd[1920]: time="2024-07-02T08:07:19.876193261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.876614 containerd[1920]: time="2024-07-02T08:07:19.876449113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.876614 containerd[1920]: time="2024-07-02T08:07:19.876488845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.876614 containerd[1920]: time="2024-07-02T08:07:19.876544837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.876614 containerd[1920]: time="2024-07-02T08:07:19.876580669Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:07:19.877491 containerd[1920]: time="2024-07-02T08:07:19.877163233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.877491 containerd[1920]: time="2024-07-02T08:07:19.877249309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.877491 containerd[1920]: time="2024-07-02T08:07:19.877418425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.877491 containerd[1920]: time="2024-07-02T08:07:19.877454137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.878091 containerd[1920]: time="2024-07-02T08:07:19.877783813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.878091 containerd[1920]: time="2024-07-02T08:07:19.877826509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.878091 containerd[1920]: time="2024-07-02T08:07:19.877884565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:07:19.878091 containerd[1920]: time="2024-07-02T08:07:19.877913377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 08:07:19.879616 containerd[1920]: time="2024-07-02T08:07:19.879366301Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:07:19.879616 containerd[1920]: time="2024-07-02T08:07:19.879530233Z" level=info msg="Connect containerd service" Jul 2 08:07:19.880239 containerd[1920]: time="2024-07-02T08:07:19.879939589Z" level=info msg="using legacy CRI server" Jul 2 08:07:19.880239 containerd[1920]: time="2024-07-02T08:07:19.879967909Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 08:07:19.880522 containerd[1920]: time="2024-07-02T08:07:19.880377733Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:07:19.882487 containerd[1920]: time="2024-07-02T08:07:19.882336865Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:07:19.882487 
containerd[1920]: time="2024-07-02T08:07:19.882434881Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:07:19.883089 containerd[1920]: time="2024-07-02T08:07:19.882786589Z" level=info msg="Start subscribing containerd event" Jul 2 08:07:19.883089 containerd[1920]: time="2024-07-02T08:07:19.882887869Z" level=info msg="Start recovering state" Jul 2 08:07:19.883947 containerd[1920]: time="2024-07-02T08:07:19.883046113Z" level=info msg="Start event monitor" Jul 2 08:07:19.883947 containerd[1920]: time="2024-07-02T08:07:19.883443949Z" level=info msg="Start snapshots syncer" Jul 2 08:07:19.883947 containerd[1920]: time="2024-07-02T08:07:19.883474285Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:07:19.883947 containerd[1920]: time="2024-07-02T08:07:19.883494313Z" level=info msg="Start streaming server" Jul 2 08:07:19.884261 containerd[1920]: time="2024-07-02T08:07:19.884197885Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 08:07:19.884369 containerd[1920]: time="2024-07-02T08:07:19.884341357Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:07:19.884481 containerd[1920]: time="2024-07-02T08:07:19.884451745Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 08:07:19.885004 containerd[1920]: time="2024-07-02T08:07:19.884824981Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:07:19.885004 containerd[1920]: time="2024-07-02T08:07:19.884944501Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 08:07:19.885321 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 08:07:19.888607 containerd[1920]: time="2024-07-02T08:07:19.885373345Z" level=info msg="containerd successfully booted in 0.277231s" Jul 2 08:07:20.253432 systemd-networkd[1842]: eth0: Gained IPv6LL Jul 2 08:07:20.263710 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 08:07:20.267873 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 08:07:20.280346 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 2 08:07:20.296391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:07:20.305725 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 08:07:20.422515 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 08:07:20.428207 amazon-ssm-agent[2105]: Initializing new seelog logger Jul 2 08:07:20.428755 amazon-ssm-agent[2105]: New Seelog Logger Creation Complete Jul 2 08:07:20.428755 amazon-ssm-agent[2105]: 2024/07/02 08:07:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:20.428755 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:20.430105 amazon-ssm-agent[2105]: 2024/07/02 08:07:20 processing appconfig overrides Jul 2 08:07:20.434070 amazon-ssm-agent[2105]: 2024/07/02 08:07:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:20.434070 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 2 08:07:20.434070 amazon-ssm-agent[2105]: 2024/07/02 08:07:20 processing appconfig overrides Jul 2 08:07:20.434532 amazon-ssm-agent[2105]: 2024/07/02 08:07:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:20.434532 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:20.434532 amazon-ssm-agent[2105]: 2024/07/02 08:07:20 processing appconfig overrides Jul 2 08:07:20.436151 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO Proxy environment variables: Jul 2 08:07:20.443282 amazon-ssm-agent[2105]: 2024/07/02 08:07:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:20.443282 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:07:20.443282 amazon-ssm-agent[2105]: 2024/07/02 08:07:20 processing appconfig overrides Jul 2 08:07:20.514488 tar[1917]: linux-arm64/LICENSE Jul 2 08:07:20.514488 tar[1917]: linux-arm64/README.md Jul 2 08:07:20.537448 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO https_proxy: Jul 2 08:07:20.564835 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 08:07:20.638919 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO http_proxy: Jul 2 08:07:20.737923 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO no_proxy: Jul 2 08:07:20.835617 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO Checking if agent identity type OnPrem can be assumed Jul 2 08:07:20.934089 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO Checking if agent identity type EC2 can be assumed Jul 2 08:07:21.032742 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO Agent will take identity from EC2 Jul 2 08:07:21.105147 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:07:21.105147 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:07:21.105147 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:07:21.105147 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 2 08:07:21.105147 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 2 08:07:21.105147 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO [amazon-ssm-agent] Starting Core Agent Jul 2 08:07:21.105828 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 2 08:07:21.105828 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO [Registrar] Starting registrar module Jul 2 08:07:21.105828 amazon-ssm-agent[2105]: 2024-07-02 08:07:20 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 2 08:07:21.105828 amazon-ssm-agent[2105]: 2024-07-02 08:07:21 INFO [EC2Identity] EC2 registration was successful. Jul 2 08:07:21.105828 amazon-ssm-agent[2105]: 2024-07-02 08:07:21 INFO [CredentialRefresher] credentialRefresher has started Jul 2 08:07:21.105828 amazon-ssm-agent[2105]: 2024-07-02 08:07:21 INFO [CredentialRefresher] Starting credentials refresher loop Jul 2 08:07:21.105828 amazon-ssm-agent[2105]: 2024-07-02 08:07:21 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 2 08:07:21.133092 amazon-ssm-agent[2105]: 2024-07-02 08:07:21 INFO [CredentialRefresher] Next credential rotation will be in 31.183319818933334 minutes Jul 2 08:07:21.479602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 08:07:21.492727 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:07:21.670841 sshd_keygen[1942]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:07:21.717313 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 08:07:21.728804 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 08:07:21.737833 systemd[1]: Started sshd@0-172.31.16.163:22-139.178.89.65:39670.service - OpenSSH per-connection server daemon (139.178.89.65:39670). Jul 2 08:07:21.770874 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:07:21.774415 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 08:07:21.787927 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 08:07:21.818744 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 08:07:21.832727 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 08:07:21.839084 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 08:07:21.842121 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 08:07:21.843990 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 08:07:21.846605 systemd[1]: Startup finished in 1.258s (kernel) + 9.774s (initrd) + 8.320s (userspace) = 19.353s. Jul 2 08:07:21.951883 sshd[2146]: Accepted publickey for core from 139.178.89.65 port 39670 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:21.955025 sshd[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:21.974897 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 08:07:21.981890 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 08:07:21.992999 systemd-logind[1911]: New session 1 of user core. Jul 2 08:07:22.024282 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 08:07:22.033949 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 08:07:22.050719 (systemd)[2161]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:22.134496 amazon-ssm-agent[2105]: 2024-07-02 08:07:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 2 08:07:22.235466 amazon-ssm-agent[2105]: 2024-07-02 08:07:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2169) started Jul 2 08:07:22.303990 systemd[2161]: Queued start job for default target default.target. Jul 2 08:07:22.311989 systemd[2161]: Created slice app.slice - User Application Slice. Jul 2 08:07:22.312697 systemd[2161]: Reached target paths.target - Paths. Jul 2 08:07:22.312922 systemd[2161]: Reached target timers.target - Timers. Jul 2 08:07:22.322481 systemd[2161]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 08:07:22.335880 amazon-ssm-agent[2105]: 2024-07-02 08:07:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 2 08:07:22.358973 systemd[2161]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 08:07:22.359766 systemd[2161]: Reached target sockets.target - Sockets. Jul 2 08:07:22.360031 systemd[2161]: Reached target basic.target - Basic System. 
Jul 2 08:07:22.360290 systemd[2161]: Reached target default.target - Main User Target. Jul 2 08:07:22.360360 systemd[2161]: Startup finished in 297ms. Jul 2 08:07:22.360484 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 08:07:22.369019 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 08:07:22.453505 kubelet[2132]: E0702 08:07:22.453434 2132 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:07:22.459543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:07:22.461150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:07:22.463403 systemd[1]: kubelet.service: Consumed 1.304s CPU time. Jul 2 08:07:22.526945 systemd[1]: Started sshd@1-172.31.16.163:22-139.178.89.65:48432.service - OpenSSH per-connection server daemon (139.178.89.65:48432). Jul 2 08:07:22.716215 sshd[2185]: Accepted publickey for core from 139.178.89.65 port 48432 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:22.720491 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:22.730842 systemd-logind[1911]: New session 2 of user core. Jul 2 08:07:22.743507 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 08:07:22.812622 ntpd[1902]: Listen normally on 7 eth0 [fe80::4df:5dff:fea7:842f%2]:123 Jul 2 08:07:22.813118 ntpd[1902]: 2 Jul 08:07:22 ntpd[1902]: Listen normally on 7 eth0 [fe80::4df:5dff:fea7:842f%2]:123 Jul 2 08:07:22.870876 sshd[2185]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:22.876419 systemd[1]: sshd@1-172.31.16.163:22-139.178.89.65:48432.service: Deactivated successfully. Jul 2 08:07:22.880120 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 08:07:22.888991 systemd-logind[1911]: Session 2 logged out. Waiting for processes to exit. Jul 2 08:07:22.891017 systemd-logind[1911]: Removed session 2. Jul 2 08:07:22.911715 systemd[1]: Started sshd@2-172.31.16.163:22-139.178.89.65:48434.service - OpenSSH per-connection server daemon (139.178.89.65:48434). Jul 2 08:07:23.090573 sshd[2192]: Accepted publickey for core from 139.178.89.65 port 48434 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:23.092593 sshd[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:23.100259 systemd-logind[1911]: New session 3 of user core. Jul 2 08:07:23.109488 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 08:07:23.229526 sshd[2192]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:23.236175 systemd[1]: sshd@2-172.31.16.163:22-139.178.89.65:48434.service: Deactivated successfully. Jul 2 08:07:23.239712 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 08:07:23.240844 systemd-logind[1911]: Session 3 logged out. Waiting for processes to exit. Jul 2 08:07:23.242540 systemd-logind[1911]: Removed session 3. Jul 2 08:07:23.265749 systemd[1]: Started sshd@3-172.31.16.163:22-139.178.89.65:48436.service - OpenSSH per-connection server daemon (139.178.89.65:48436). 
Jul 2 08:07:23.443018 sshd[2199]: Accepted publickey for core from 139.178.89.65 port 48436 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:23.445580 sshd[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:23.452915 systemd-logind[1911]: New session 4 of user core. Jul 2 08:07:23.460506 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 08:07:23.586861 sshd[2199]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:23.593047 systemd[1]: sshd@3-172.31.16.163:22-139.178.89.65:48436.service: Deactivated successfully. Jul 2 08:07:23.597366 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 08:07:23.598849 systemd-logind[1911]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:07:23.600738 systemd-logind[1911]: Removed session 4. Jul 2 08:07:23.626816 systemd[1]: Started sshd@4-172.31.16.163:22-139.178.89.65:48450.service - OpenSSH per-connection server daemon (139.178.89.65:48450). Jul 2 08:07:23.797883 sshd[2206]: Accepted publickey for core from 139.178.89.65 port 48450 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:23.800970 sshd[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:23.809731 systemd-logind[1911]: New session 5 of user core. Jul 2 08:07:23.816507 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 08:07:23.949783 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 08:07:23.950337 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:07:23.968818 sudo[2209]: pam_unix(sudo:session): session closed for user root Jul 2 08:07:23.992593 sshd[2206]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:23.999130 systemd[1]: sshd@4-172.31.16.163:22-139.178.89.65:48450.service: Deactivated successfully. Jul 2 08:07:24.003687 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:07:24.005944 systemd-logind[1911]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:07:24.008176 systemd-logind[1911]: Removed session 5. Jul 2 08:07:24.029927 systemd[1]: Started sshd@5-172.31.16.163:22-139.178.89.65:48454.service - OpenSSH per-connection server daemon (139.178.89.65:48454). Jul 2 08:07:24.208138 sshd[2214]: Accepted publickey for core from 139.178.89.65 port 48454 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:24.211163 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:24.219351 systemd-logind[1911]: New session 6 of user core. Jul 2 08:07:24.227510 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 08:07:24.330305 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 08:07:24.330876 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:07:24.337197 sudo[2218]: pam_unix(sudo:session): session closed for user root Jul 2 08:07:24.347374 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 08:07:24.347888 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:07:24.371744 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 08:07:24.382977 auditctl[2221]: No rules Jul 2 08:07:24.383795 systemd[1]: audit-rules.service: Deactivated successfully. 
Jul 2 08:07:24.384187 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 08:07:24.395911 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:07:24.439792 augenrules[2239]: No rules Jul 2 08:07:24.442756 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:07:24.446660 sudo[2217]: pam_unix(sudo:session): session closed for user root Jul 2 08:07:24.469534 sshd[2214]: pam_unix(sshd:session): session closed for user core Jul 2 08:07:24.478145 systemd[1]: sshd@5-172.31.16.163:22-139.178.89.65:48454.service: Deactivated successfully. Jul 2 08:07:24.482776 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:07:24.484694 systemd-logind[1911]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:07:24.486671 systemd-logind[1911]: Removed session 6. Jul 2 08:07:24.508739 systemd[1]: Started sshd@6-172.31.16.163:22-139.178.89.65:48460.service - OpenSSH per-connection server daemon (139.178.89.65:48460). Jul 2 08:07:24.690983 sshd[2247]: Accepted publickey for core from 139.178.89.65 port 48460 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:07:24.693445 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:07:24.702466 systemd-logind[1911]: New session 7 of user core. Jul 2 08:07:24.705513 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 08:07:24.810609 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:07:24.811121 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:07:25.032723 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 08:07:25.036147 (dockerd)[2261]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 08:07:25.411481 dockerd[2261]: time="2024-07-02T08:07:25.411372136Z" level=info msg="Starting up" Jul 2 08:07:26.007173 dockerd[2261]: time="2024-07-02T08:07:26.006735140Z" level=info msg="Loading containers: start." Jul 2 08:07:26.194262 kernel: Initializing XFRM netlink socket Jul 2 08:07:26.235094 (udev-worker)[2275]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:07:26.322121 systemd-networkd[1842]: docker0: Link UP Jul 2 08:07:26.340778 dockerd[2261]: time="2024-07-02T08:07:26.340640479Z" level=info msg="Loading containers: done." Jul 2 08:07:26.443769 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3971758386-merged.mount: Deactivated successfully. Jul 2 08:07:26.454285 dockerd[2261]: time="2024-07-02T08:07:26.453645803Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 08:07:26.454285 dockerd[2261]: time="2024-07-02T08:07:26.454012670Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 08:07:26.455480 dockerd[2261]: time="2024-07-02T08:07:26.455004918Z" level=info msg="Daemon has completed initialization" Jul 2 08:07:26.501633 dockerd[2261]: time="2024-07-02T08:07:26.501557823Z" level=info msg="API listen on /run/docker.sock" Jul 2 08:07:26.502075 systemd[1]: Started docker.service - Docker Application Container Engine. 
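Once dockerd logs "API listen on /run/docker.sock", the daemon can be reached over that Unix socket. The sketch below (Python standard library only) probes Docker's /_ping health endpoint over the socket path taken from the log; it is an illustration, not part of the boot sequence.

import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a local Unix socket instead of TCP."""

    def __init__(self, socket_path, timeout=2):
        super().__init__("localhost", timeout=timeout)
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.settimeout(self.timeout)
        self.sock.connect(self.socket_path)

if __name__ == "__main__":
    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")
    resp = conn.getresponse()
    # The daemon answers 200 with body "OK" once it is accepting API calls.
    print(resp.status, resp.read().decode())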
Jul 2 08:07:27.343438 containerd[1920]: time="2024-07-02T08:07:27.343305476Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 08:07:28.019651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3722361730.mount: Deactivated successfully. Jul 2 08:07:30.674287 containerd[1920]: time="2024-07-02T08:07:30.673956533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:30.676154 containerd[1920]: time="2024-07-02T08:07:30.676082075Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940430" Jul 2 08:07:30.677787 containerd[1920]: time="2024-07-02T08:07:30.677255157Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:30.684645 containerd[1920]: time="2024-07-02T08:07:30.684589695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:30.686874 containerd[1920]: time="2024-07-02T08:07:30.686816532Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 3.343433209s" Jul 2 08:07:30.687056 containerd[1920]: time="2024-07-02T08:07:30.687025952Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\"" Jul 2 08:07:30.727388 containerd[1920]: time="2024-07-02T08:07:30.727341032Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 08:07:32.642961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:07:32.655066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:07:33.473024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:07:33.483964 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:07:33.582727 kubelet[2465]: E0702 08:07:33.582621 2465 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:07:33.587283 containerd[1920]: time="2024-07-02T08:07:33.585511131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:33.591986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:07:33.593451 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
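The kubelet failure above (and the identical one at 08:07:22) is the usual pre-bootstrap state on a node that has not yet been joined to a cluster: kubelet.service starts at boot, but /var/lib/kubelet/config.yaml is only written later (normally by kubeadm), so the unit exits and systemd schedules a restart. A trivial illustrative check, using the path from the error message:

from pathlib import Path

# Path taken from the kubelet error in the log; normally created by
# 'kubeadm init' or 'kubeadm join' during cluster bootstrap.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_ready(path: Path = KUBELET_CONFIG) -> bool:
    """True once the kubelet config file exists and is non-empty."""
    return path.is_file() and path.stat().st_size > 0

if __name__ == "__main__":
    if kubelet_config_ready():
        print(f"{KUBELET_CONFIG} present - kubelet can load its config")
    else:
        print(f"{KUBELET_CONFIG} missing - kubelet will exit until it is written")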
Jul 2 08:07:33.594022 containerd[1920]: time="2024-07-02T08:07:33.593951097Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881371" Jul 2 08:07:33.595070 containerd[1920]: time="2024-07-02T08:07:33.595024145Z" level=info msg="ImageCreate event name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:33.605556 containerd[1920]: time="2024-07-02T08:07:33.605472499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:33.610108 containerd[1920]: time="2024-07-02T08:07:33.610043939Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 2.882461418s" Jul 2 08:07:33.610324 containerd[1920]: time="2024-07-02T08:07:33.610110224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\"" Jul 2 08:07:33.651003 containerd[1920]: time="2024-07-02T08:07:33.650658076Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 08:07:35.685288 containerd[1920]: time="2024-07-02T08:07:35.684901340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:35.687182 containerd[1920]: time="2024-07-02T08:07:35.687115967Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155688" Jul 2 08:07:35.688489 containerd[1920]: time="2024-07-02T08:07:35.688404559Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:35.695277 containerd[1920]: time="2024-07-02T08:07:35.694194116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:35.697290 containerd[1920]: time="2024-07-02T08:07:35.696718666Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 2.04600158s" Jul 2 08:07:35.697290 containerd[1920]: time="2024-07-02T08:07:35.696784074Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\"" Jul 2 08:07:35.739939 containerd[1920]: time="2024-07-02T08:07:35.739858229Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 08:07:37.235812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256008757.mount: Deactivated successfully. 
Jul 2 08:07:37.867725 containerd[1920]: time="2024-07-02T08:07:37.867650900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:37.873954 containerd[1920]: time="2024-07-02T08:07:37.873859839Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634092" Jul 2 08:07:37.887034 containerd[1920]: time="2024-07-02T08:07:37.886964737Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:37.900513 containerd[1920]: time="2024-07-02T08:07:37.900310282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:37.902682 containerd[1920]: time="2024-07-02T08:07:37.901378912Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 2.160965021s" Jul 2 08:07:37.902682 containerd[1920]: time="2024-07-02T08:07:37.901437670Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\"" Jul 2 08:07:37.951253 containerd[1920]: time="2024-07-02T08:07:37.948579457Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 08:07:38.568618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1503364727.mount: Deactivated successfully. 
Jul 2 08:07:39.660863 containerd[1920]: time="2024-07-02T08:07:39.660217260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:39.662569 containerd[1920]: time="2024-07-02T08:07:39.662475853Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jul 2 08:07:39.663774 containerd[1920]: time="2024-07-02T08:07:39.663688591Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:39.669817 containerd[1920]: time="2024-07-02T08:07:39.669715603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:39.672376 containerd[1920]: time="2024-07-02T08:07:39.672143552Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.723497642s" Jul 2 08:07:39.672376 containerd[1920]: time="2024-07-02T08:07:39.672207448Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 08:07:39.715299 containerd[1920]: time="2024-07-02T08:07:39.715127434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 08:07:40.208285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117503840.mount: Deactivated successfully. 
Jul 2 08:07:40.219560 containerd[1920]: time="2024-07-02T08:07:40.219484682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:40.221154 containerd[1920]: time="2024-07-02T08:07:40.221102491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jul 2 08:07:40.222583 containerd[1920]: time="2024-07-02T08:07:40.222494706Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:40.227182 containerd[1920]: time="2024-07-02T08:07:40.227071502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:40.229356 containerd[1920]: time="2024-07-02T08:07:40.228847537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 513.660962ms" Jul 2 08:07:40.229356 containerd[1920]: time="2024-07-02T08:07:40.228907267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 08:07:40.268425 containerd[1920]: time="2024-07-02T08:07:40.268363942Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 08:07:40.839758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430349476.mount: Deactivated successfully. Jul 2 08:07:43.642605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 08:07:43.648613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:07:44.106069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:07:44.118749 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:07:44.195581 kubelet[2604]: E0702 08:07:44.195474 2604 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:07:44.201029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:07:44.201438 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 08:07:45.570288 containerd[1920]: time="2024-07-02T08:07:45.569525503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:45.572210 containerd[1920]: time="2024-07-02T08:07:45.572135055Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jul 2 08:07:45.574609 containerd[1920]: time="2024-07-02T08:07:45.574513912Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:45.582415 containerd[1920]: time="2024-07-02T08:07:45.582328509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:07:45.584838 containerd[1920]: time="2024-07-02T08:07:45.584786546Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.316360005s" Jul 2 08:07:45.585125 containerd[1920]: time="2024-07-02T08:07:45.584979950Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jul 2 08:07:49.776781 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 08:07:53.649634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:07:53.660767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:07:53.709527 systemd[1]: Reloading requested from client PID 2686 ('systemctl') (unit session-7.scope)... Jul 2 08:07:53.709744 systemd[1]: Reloading... Jul 2 08:07:53.937896 zram_generator::config[2730]: No configuration found. Jul 2 08:07:54.186192 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:07:54.354455 systemd[1]: Reloading finished in 643 ms. Jul 2 08:07:54.452846 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:07:54.454489 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:07:54.455021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:07:54.467337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:07:54.780428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:07:54.796162 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:07:54.874392 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:07:54.874894 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 08:07:54.874998 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:07:54.875321 kubelet[2789]: I0702 08:07:54.875272 2789 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:07:56.611650 kubelet[2789]: I0702 08:07:56.611601 2789 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 08:07:56.613450 kubelet[2789]: I0702 08:07:56.612282 2789 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:07:56.613450 kubelet[2789]: I0702 08:07:56.612635 2789 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 08:07:56.635874 kubelet[2789]: E0702 08:07:56.635826 2789 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:56.639091 kubelet[2789]: I0702 08:07:56.638846 2789 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:07:56.657753 kubelet[2789]: I0702 08:07:56.657699 2789 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 08:07:56.661262 kubelet[2789]: I0702 08:07:56.661127 2789 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:07:56.661557 kubelet[2789]: I0702 08:07:56.661242 2789 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:07:56.661761 kubelet[2789]: I0702 08:07:56.661596 2789 topology_manager.go:138] "Creating topology manager 
with none policy" Jul 2 08:07:56.661761 kubelet[2789]: I0702 08:07:56.661618 2789 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:07:56.661916 kubelet[2789]: I0702 08:07:56.661884 2789 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:07:56.663494 kubelet[2789]: I0702 08:07:56.663428 2789 kubelet.go:400] "Attempting to sync node with API server" Jul 2 08:07:56.663494 kubelet[2789]: I0702 08:07:56.663485 2789 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:07:56.663690 kubelet[2789]: I0702 08:07:56.663608 2789 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:07:56.663690 kubelet[2789]: I0702 08:07:56.663683 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:07:56.665501 kubelet[2789]: I0702 08:07:56.665446 2789 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:07:56.666259 kubelet[2789]: I0702 08:07:56.665850 2789 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:07:56.666259 kubelet[2789]: W0702 08:07:56.665992 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 08:07:56.667245 kubelet[2789]: I0702 08:07:56.667161 2789 server.go:1264] "Started kubelet" Jul 2 08:07:56.667532 kubelet[2789]: W0702 08:07:56.667445 2789 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:56.667644 kubelet[2789]: E0702 08:07:56.667549 2789 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:56.676282 kubelet[2789]: I0702 08:07:56.675914 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:07:56.681166 kubelet[2789]: E0702 08:07:56.680112 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.163:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.163:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-163.17de56eb623602f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-163,UID:ip-172-31-16-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-163,},FirstTimestamp:2024-07-02 08:07:56.667118327 +0000 UTC m=+1.863466173,LastTimestamp:2024-07-02 08:07:56.667118327 +0000 UTC m=+1.863466173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-163,}" Jul 2 08:07:56.683963 kubelet[2789]: W0702 08:07:56.683677 2789 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-163&limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:56.683963 kubelet[2789]: E0702 08:07:56.683785 2789 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: Get "https://172.31.16.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-163&limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:56.687858 kubelet[2789]: I0702 08:07:56.687518 2789 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:07:56.688317 kubelet[2789]: I0702 08:07:56.688278 2789 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:07:56.691314 kubelet[2789]: I0702 08:07:56.691075 2789 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 08:07:56.692010 kubelet[2789]: I0702 08:07:56.691961 2789 server.go:455] "Adding debug handlers to kubelet server" Jul 2 08:07:56.693294 kubelet[2789]: I0702 08:07:56.692949 2789 reconciler.go:26] "Reconciler: start to sync state" Jul 2 08:07:56.693838 kubelet[2789]: I0702 08:07:56.693721 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:07:56.694131 kubelet[2789]: I0702 08:07:56.694091 2789 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:07:56.696034 kubelet[2789]: W0702 08:07:56.694608 2789 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:56.696034 kubelet[2789]: E0702 08:07:56.694705 2789 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:56.696034 kubelet[2789]: E0702 08:07:56.694834 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-163?timeout=10s\": dial tcp 172.31.16.163:6443: connect: connection refused" interval="200ms" Jul 2 08:07:56.698145 kubelet[2789]: E0702 08:07:56.698084 2789 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:07:56.699823 kubelet[2789]: I0702 08:07:56.699781 2789 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:07:56.699988 kubelet[2789]: I0702 08:07:56.699967 2789 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:07:56.700305 kubelet[2789]: I0702 08:07:56.700264 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:07:56.720306 kubelet[2789]: I0702 08:07:56.720191 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:07:56.722680 kubelet[2789]: I0702 08:07:56.722601 2789 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:07:56.722840 kubelet[2789]: I0702 08:07:56.722715 2789 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:07:56.722840 kubelet[2789]: I0702 08:07:56.722758 2789 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 08:07:56.722960 kubelet[2789]: E0702 08:07:56.722832 2789 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:07:56.735501 kubelet[2789]: W0702 08:07:56.735395 2789 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:56.735806 kubelet[2789]: E0702 08:07:56.735514 2789 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:56.753864 kubelet[2789]: I0702 08:07:56.753708 2789 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:07:56.753864 kubelet[2789]: I0702 08:07:56.753756 2789 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:07:56.753864 kubelet[2789]: I0702 08:07:56.753794 2789 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:07:56.756639 kubelet[2789]: I0702 08:07:56.756574 2789 policy_none.go:49] "None policy: Start" Jul 2 08:07:56.759457 kubelet[2789]: I0702 08:07:56.759416 2789 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:07:56.760878 kubelet[2789]: I0702 08:07:56.759756 2789 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:07:56.777344 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 08:07:56.791733 kubelet[2789]: I0702 08:07:56.791634 2789 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-163" Jul 2 08:07:56.792941 kubelet[2789]: E0702 08:07:56.792802 2789 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.163:6443/api/v1/nodes\": dial tcp 172.31.16.163:6443: connect: connection refused" node="ip-172-31-16-163" Jul 2 08:07:56.800631 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 08:07:56.809978 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 2 08:07:56.821811 kubelet[2789]: I0702 08:07:56.821004 2789 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:07:56.821811 kubelet[2789]: I0702 08:07:56.821355 2789 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 08:07:56.821811 kubelet[2789]: I0702 08:07:56.821526 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:07:56.825125 kubelet[2789]: I0702 08:07:56.824677 2789 topology_manager.go:215] "Topology Admit Handler" podUID="b6f081b1d948cf76e6018954f6838a68" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-163" Jul 2 08:07:56.827995 kubelet[2789]: I0702 08:07:56.827916 2789 topology_manager.go:215] "Topology Admit Handler" podUID="ed499bba41676712701fddda4aeddd14" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-163" Jul 2 08:07:56.828531 kubelet[2789]: E0702 08:07:56.828493 2789 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-163\" not found" Jul 2 08:07:56.833489 kubelet[2789]: I0702 08:07:56.832773 2789 topology_manager.go:215] "Topology Admit Handler" podUID="b94ae89291dadbbd9f6c1c67bd07838e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-163" Jul 2 08:07:56.855988 systemd[1]: Created slice kubepods-burstable-podb6f081b1d948cf76e6018954f6838a68.slice - libcontainer container kubepods-burstable-podb6f081b1d948cf76e6018954f6838a68.slice. Jul 2 08:07:56.871128 systemd[1]: Created slice kubepods-burstable-poded499bba41676712701fddda4aeddd14.slice - libcontainer container kubepods-burstable-poded499bba41676712701fddda4aeddd14.slice. Jul 2 08:07:56.884704 systemd[1]: Created slice kubepods-burstable-podb94ae89291dadbbd9f6c1c67bd07838e.slice - libcontainer container kubepods-burstable-podb94ae89291dadbbd9f6c1c67bd07838e.slice. 
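The kubepods-*.slice names being created here follow directly from each pod's QoS class and UID; the only transformation visible in this log is that hyphens in the UID become underscores. A small sketch derived from the names above (and from the kube-proxy pod created later in this log), not from the cgroup manager's source:

    // How the slice names above are built from QoS class and pod UID, as far
    // as this log shows: hyphens in the UID become underscores. Guaranteed
    // pods sit directly under kubepods.slice, so they get no QoS segment.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qosClass, podUID string) string {
        prefix := "kubepods"
        if qosClass != "" {
            prefix += "-" + qosClass
        }
        return fmt.Sprintf("%s-pod%s.slice", prefix, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // Static kube-apiserver pod: the UID is a manifest hash, no hyphens.
        fmt.Println(podSlice("burstable", "b6f081b1d948cf76e6018954f6838a68"))
        // kube-proxy pod created later in this log: a regular API UID with hyphens.
        fmt.Println(podSlice("besteffort", "2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec"))
    }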
Jul 2 08:07:56.894881 kubelet[2789]: I0702 08:07:56.894191 2789 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed499bba41676712701fddda4aeddd14-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-163\" (UID: \"ed499bba41676712701fddda4aeddd14\") " pod="kube-system/kube-controller-manager-ip-172-31-16-163" Jul 2 08:07:56.894881 kubelet[2789]: I0702 08:07:56.894401 2789 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed499bba41676712701fddda4aeddd14-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-163\" (UID: \"ed499bba41676712701fddda4aeddd14\") " pod="kube-system/kube-controller-manager-ip-172-31-16-163" Jul 2 08:07:56.894881 kubelet[2789]: I0702 08:07:56.894448 2789 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed499bba41676712701fddda4aeddd14-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-163\" (UID: \"ed499bba41676712701fddda4aeddd14\") " pod="kube-system/kube-controller-manager-ip-172-31-16-163" Jul 2 08:07:56.894881 kubelet[2789]: I0702 08:07:56.894495 2789 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b94ae89291dadbbd9f6c1c67bd07838e-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-163\" (UID: \"b94ae89291dadbbd9f6c1c67bd07838e\") " pod="kube-system/kube-scheduler-ip-172-31-16-163" Jul 2 08:07:56.894881 kubelet[2789]: I0702 08:07:56.894538 2789 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6f081b1d948cf76e6018954f6838a68-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-163\" (UID: \"b6f081b1d948cf76e6018954f6838a68\") " pod="kube-system/kube-apiserver-ip-172-31-16-163" Jul 2 08:07:56.895406 kubelet[2789]: I0702 08:07:56.894583 2789 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed499bba41676712701fddda4aeddd14-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-163\" (UID: \"ed499bba41676712701fddda4aeddd14\") " pod="kube-system/kube-controller-manager-ip-172-31-16-163" Jul 2 08:07:56.895406 kubelet[2789]: I0702 08:07:56.894629 2789 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed499bba41676712701fddda4aeddd14-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-163\" (UID: \"ed499bba41676712701fddda4aeddd14\") " pod="kube-system/kube-controller-manager-ip-172-31-16-163" Jul 2 08:07:56.895406 kubelet[2789]: I0702 08:07:56.894667 2789 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6f081b1d948cf76e6018954f6838a68-ca-certs\") pod \"kube-apiserver-ip-172-31-16-163\" (UID: \"b6f081b1d948cf76e6018954f6838a68\") " pod="kube-system/kube-apiserver-ip-172-31-16-163" Jul 2 08:07:56.895406 kubelet[2789]: I0702 08:07:56.894709 2789 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b6f081b1d948cf76e6018954f6838a68-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-163\" (UID: \"b6f081b1d948cf76e6018954f6838a68\") " pod="kube-system/kube-apiserver-ip-172-31-16-163" Jul 2 08:07:56.895696 kubelet[2789]: E0702 08:07:56.895626 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-163?timeout=10s\": dial tcp 172.31.16.163:6443: connect: connection refused" interval="400ms" Jul 2 08:07:56.996389 kubelet[2789]: I0702 08:07:56.996342 2789 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-163" Jul 2 08:07:56.996978 kubelet[2789]: E0702 08:07:56.996844 2789 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.163:6443/api/v1/nodes\": dial tcp 172.31.16.163:6443: connect: connection refused" node="ip-172-31-16-163" Jul 2 08:07:57.167266 containerd[1920]: time="2024-07-02T08:07:57.167158338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-163,Uid:b6f081b1d948cf76e6018954f6838a68,Namespace:kube-system,Attempt:0,}" Jul 2 08:07:57.181356 containerd[1920]: time="2024-07-02T08:07:57.181285090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-163,Uid:ed499bba41676712701fddda4aeddd14,Namespace:kube-system,Attempt:0,}" Jul 2 08:07:57.190994 containerd[1920]: time="2024-07-02T08:07:57.190918945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-163,Uid:b94ae89291dadbbd9f6c1c67bd07838e,Namespace:kube-system,Attempt:0,}" Jul 2 08:07:57.296973 kubelet[2789]: E0702 08:07:57.296900 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-163?timeout=10s\": dial tcp 172.31.16.163:6443: connect: connection refused" interval="800ms" Jul 2 08:07:57.401974 kubelet[2789]: I0702 08:07:57.401909 2789 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-163" Jul 2 08:07:57.402546 kubelet[2789]: E0702 08:07:57.402494 2789 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.163:6443/api/v1/nodes\": dial tcp 172.31.16.163:6443: connect: connection refused" node="ip-172-31-16-163" Jul 2 08:07:57.544928 kubelet[2789]: W0702 08:07:57.544692 2789 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:57.544928 kubelet[2789]: E0702 08:07:57.544779 2789 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:57.671821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount698461449.mount: Deactivated successfully. 
Jul 2 08:07:57.675076 kubelet[2789]: W0702 08:07:57.674951 2789 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-163&limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:57.675076 kubelet[2789]: E0702 08:07:57.675042 2789 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-163&limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:57.686479 containerd[1920]: time="2024-07-02T08:07:57.686401153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:07:57.693488 containerd[1920]: time="2024-07-02T08:07:57.693419249Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:07:57.695361 containerd[1920]: time="2024-07-02T08:07:57.695269961Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 2 08:07:57.696385 containerd[1920]: time="2024-07-02T08:07:57.696323560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:07:57.698877 containerd[1920]: time="2024-07-02T08:07:57.698791105Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:07:57.701246 containerd[1920]: time="2024-07-02T08:07:57.701154258Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:07:57.701957 containerd[1920]: time="2024-07-02T08:07:57.701845095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:07:57.706357 containerd[1920]: time="2024-07-02T08:07:57.706290281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:07:57.711068 containerd[1920]: time="2024-07-02T08:07:57.710738720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.395033ms" Jul 2 08:07:57.715407 containerd[1920]: time="2024-07-02T08:07:57.714985398Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 533.515128ms" Jul 2 08:07:57.728978 containerd[1920]: time="2024-07-02T08:07:57.728894217Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.836016ms" Jul 2 08:07:57.813898 kubelet[2789]: W0702 08:07:57.813032 2789 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:57.813898 kubelet[2789]: E0702 08:07:57.813133 2789 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:57.943197 containerd[1920]: time="2024-07-02T08:07:57.942770207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:07:57.943197 containerd[1920]: time="2024-07-02T08:07:57.942909573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:07:57.943197 containerd[1920]: time="2024-07-02T08:07:57.942964501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:07:57.943197 containerd[1920]: time="2024-07-02T08:07:57.943004096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:07:57.943567 containerd[1920]: time="2024-07-02T08:07:57.943427680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:07:57.944095 containerd[1920]: time="2024-07-02T08:07:57.943726305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:07:57.944095 containerd[1920]: time="2024-07-02T08:07:57.943880438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:07:57.944095 containerd[1920]: time="2024-07-02T08:07:57.944012060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:07:57.954146 containerd[1920]: time="2024-07-02T08:07:57.953995565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:07:57.954420 containerd[1920]: time="2024-07-02T08:07:57.954302427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:07:57.954538 containerd[1920]: time="2024-07-02T08:07:57.954469634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:07:57.954661 containerd[1920]: time="2024-07-02T08:07:57.954565862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:07:57.997909 systemd[1]: Started cri-containerd-2ce88d67bb8e06e29931c15b8404a860bfdbb55629025bc05ee572309e74a9c7.scope - libcontainer container 2ce88d67bb8e06e29931c15b8404a860bfdbb55629025bc05ee572309e74a9c7. Jul 2 08:07:58.003261 systemd[1]: Started cri-containerd-b6893964c12115620fbdd71be60ad9351835035dc792e9403d816df1c41d171e.scope - libcontainer container b6893964c12115620fbdd71be60ad9351835035dc792e9403d816df1c41d171e. Jul 2 08:07:58.019311 systemd[1]: Started cri-containerd-75def2b9851034298022d67a4b19e81ef9e9cc02e7537466d2dfd15b430a7707.scope - libcontainer container 75def2b9851034298022d67a4b19e81ef9e9cc02e7537466d2dfd15b430a7707. Jul 2 08:07:58.101850 kubelet[2789]: E0702 08:07:58.099206 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-163?timeout=10s\": dial tcp 172.31.16.163:6443: connect: connection refused" interval="1.6s" Jul 2 08:07:58.126720 containerd[1920]: time="2024-07-02T08:07:58.125861625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-163,Uid:ed499bba41676712701fddda4aeddd14,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ce88d67bb8e06e29931c15b8404a860bfdbb55629025bc05ee572309e74a9c7\"" Jul 2 08:07:58.139568 containerd[1920]: time="2024-07-02T08:07:58.139320375Z" level=info msg="CreateContainer within sandbox \"2ce88d67bb8e06e29931c15b8404a860bfdbb55629025bc05ee572309e74a9c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:07:58.140935 kubelet[2789]: W0702 08:07:58.140764 2789 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:58.140935 kubelet[2789]: E0702 08:07:58.140858 2789 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.163:6443: connect: connection refused Jul 2 08:07:58.150108 containerd[1920]: time="2024-07-02T08:07:58.149683162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-163,Uid:b94ae89291dadbbd9f6c1c67bd07838e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6893964c12115620fbdd71be60ad9351835035dc792e9403d816df1c41d171e\"" Jul 2 08:07:58.154570 containerd[1920]: time="2024-07-02T08:07:58.154320407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-163,Uid:b6f081b1d948cf76e6018954f6838a68,Namespace:kube-system,Attempt:0,} returns sandbox id \"75def2b9851034298022d67a4b19e81ef9e9cc02e7537466d2dfd15b430a7707\"" Jul 2 08:07:58.157174 containerd[1920]: time="2024-07-02T08:07:58.157055830Z" level=info msg="CreateContainer within sandbox \"b6893964c12115620fbdd71be60ad9351835035dc792e9403d816df1c41d171e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:07:58.163477 containerd[1920]: time="2024-07-02T08:07:58.163408673Z" level=info msg="CreateContainer within sandbox \"75def2b9851034298022d67a4b19e81ef9e9cc02e7537466d2dfd15b430a7707\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:07:58.181783 containerd[1920]: 
time="2024-07-02T08:07:58.181715986Z" level=info msg="CreateContainer within sandbox \"2ce88d67bb8e06e29931c15b8404a860bfdbb55629025bc05ee572309e74a9c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e\"" Jul 2 08:07:58.182699 containerd[1920]: time="2024-07-02T08:07:58.182627590Z" level=info msg="StartContainer for \"e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e\"" Jul 2 08:07:58.205073 containerd[1920]: time="2024-07-02T08:07:58.204954272Z" level=info msg="CreateContainer within sandbox \"b6893964c12115620fbdd71be60ad9351835035dc792e9403d816df1c41d171e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0\"" Jul 2 08:07:58.206213 kubelet[2789]: I0702 08:07:58.205817 2789 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-163" Jul 2 08:07:58.206403 kubelet[2789]: E0702 08:07:58.206361 2789 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.163:6443/api/v1/nodes\": dial tcp 172.31.16.163:6443: connect: connection refused" node="ip-172-31-16-163" Jul 2 08:07:58.206930 containerd[1920]: time="2024-07-02T08:07:58.206856563Z" level=info msg="StartContainer for \"ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0\"" Jul 2 08:07:58.211270 containerd[1920]: time="2024-07-02T08:07:58.209968626Z" level=info msg="CreateContainer within sandbox \"75def2b9851034298022d67a4b19e81ef9e9cc02e7537466d2dfd15b430a7707\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d3a2a34d1a41b8ec7f34033da45c1c92ff482ade7bb497b705505aad7cb66eb2\"" Jul 2 08:07:58.211270 containerd[1920]: time="2024-07-02T08:07:58.210741464Z" level=info msg="StartContainer for \"d3a2a34d1a41b8ec7f34033da45c1c92ff482ade7bb497b705505aad7cb66eb2\"" Jul 2 08:07:58.242605 systemd[1]: Started cri-containerd-e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e.scope - libcontainer container e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e. Jul 2 08:07:58.285581 systemd[1]: Started cri-containerd-d3a2a34d1a41b8ec7f34033da45c1c92ff482ade7bb497b705505aad7cb66eb2.scope - libcontainer container d3a2a34d1a41b8ec7f34033da45c1c92ff482ade7bb497b705505aad7cb66eb2. Jul 2 08:07:58.316512 systemd[1]: Started cri-containerd-ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0.scope - libcontainer container ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0. 
Jul 2 08:07:58.401787 containerd[1920]: time="2024-07-02T08:07:58.401709466Z" level=info msg="StartContainer for \"d3a2a34d1a41b8ec7f34033da45c1c92ff482ade7bb497b705505aad7cb66eb2\" returns successfully" Jul 2 08:07:58.401926 containerd[1920]: time="2024-07-02T08:07:58.401887527Z" level=info msg="StartContainer for \"e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e\" returns successfully" Jul 2 08:07:58.449498 containerd[1920]: time="2024-07-02T08:07:58.449379745Z" level=info msg="StartContainer for \"ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0\" returns successfully" Jul 2 08:07:59.832877 kubelet[2789]: I0702 08:07:59.832825 2789 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-163" Jul 2 08:08:02.533036 kubelet[2789]: E0702 08:08:02.532987 2789 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-163\" not found" node="ip-172-31-16-163" Jul 2 08:08:02.668540 kubelet[2789]: I0702 08:08:02.668371 2789 apiserver.go:52] "Watching apiserver" Jul 2 08:08:02.692383 kubelet[2789]: I0702 08:08:02.692287 2789 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:08:02.726575 kubelet[2789]: I0702 08:08:02.725286 2789 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-163" Jul 2 08:08:03.498329 update_engine[1912]: I0702 08:08:03.498267 1912 update_attempter.cc:509] Updating boot flags... Jul 2 08:08:03.635292 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3076) Jul 2 08:08:04.095296 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3067) Jul 2 08:08:05.258695 systemd[1]: Reloading requested from client PID 3246 ('systemctl') (unit session-7.scope)... Jul 2 08:08:05.258729 systemd[1]: Reloading... Jul 2 08:08:05.555280 zram_generator::config[3285]: No configuration found. Jul 2 08:08:05.790932 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:08:05.991199 systemd[1]: Reloading finished in 731 ms. Jul 2 08:08:06.077360 kubelet[2789]: I0702 08:08:06.077084 2789 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:08:06.078338 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:08:06.094211 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:08:06.094689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:08:06.094790 systemd[1]: kubelet.service: Consumed 2.660s CPU time, 113.9M memory peak, 0B memory swap peak. Jul 2 08:08:06.105853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:08:06.443568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:08:06.460850 (kubelet)[3344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:08:06.576603 kubelet[3344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 08:08:06.579132 kubelet[3344]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:08:06.579132 kubelet[3344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:08:06.579132 kubelet[3344]: I0702 08:08:06.577203 3344 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:08:06.585396 kubelet[3344]: I0702 08:08:06.585334 3344 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 08:08:06.585396 kubelet[3344]: I0702 08:08:06.585380 3344 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:08:06.585781 kubelet[3344]: I0702 08:08:06.585710 3344 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 08:08:06.588378 kubelet[3344]: I0702 08:08:06.588327 3344 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 08:08:06.591129 kubelet[3344]: I0702 08:08:06.590283 3344 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:08:06.606710 kubelet[3344]: I0702 08:08:06.606664 3344 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 08:08:06.608255 kubelet[3344]: I0702 08:08:06.607033 3344 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:08:06.608255 kubelet[3344]: I0702 08:08:06.607073 3344 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:08:06.608255 kubelet[3344]: I0702 08:08:06.607483 3344 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 
08:08:06.608255 kubelet[3344]: I0702 08:08:06.607503 3344 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:08:06.608255 kubelet[3344]: I0702 08:08:06.607567 3344 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:08:06.608687 kubelet[3344]: I0702 08:08:06.607781 3344 kubelet.go:400] "Attempting to sync node with API server" Jul 2 08:08:06.608687 kubelet[3344]: I0702 08:08:06.608657 3344 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:08:06.608805 kubelet[3344]: I0702 08:08:06.608748 3344 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:08:06.609011 kubelet[3344]: I0702 08:08:06.608981 3344 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:08:06.611514 kubelet[3344]: I0702 08:08:06.611441 3344 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:08:06.613169 kubelet[3344]: I0702 08:08:06.611858 3344 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:08:06.613169 kubelet[3344]: I0702 08:08:06.612640 3344 server.go:1264] "Started kubelet" Jul 2 08:08:06.619480 kubelet[3344]: I0702 08:08:06.616729 3344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:08:06.636677 kubelet[3344]: I0702 08:08:06.636325 3344 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:08:06.639301 kubelet[3344]: I0702 08:08:06.638036 3344 server.go:455] "Adding debug handlers to kubelet server" Jul 2 08:08:06.647264 kubelet[3344]: I0702 08:08:06.646144 3344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:08:06.649122 kubelet[3344]: I0702 08:08:06.647715 3344 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:08:06.657443 kubelet[3344]: I0702 08:08:06.655963 3344 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:08:06.661156 kubelet[3344]: I0702 08:08:06.661103 3344 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 08:08:06.661929 kubelet[3344]: I0702 08:08:06.661904 3344 reconciler.go:26] "Reconciler: start to sync state" Jul 2 08:08:06.677447 kubelet[3344]: I0702 08:08:06.675428 3344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:08:06.678391 kubelet[3344]: I0702 08:08:06.678343 3344 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:08:06.678548 kubelet[3344]: I0702 08:08:06.678416 3344 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:08:06.678548 kubelet[3344]: I0702 08:08:06.678448 3344 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 08:08:06.678548 kubelet[3344]: E0702 08:08:06.678516 3344 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:08:06.691260 kubelet[3344]: I0702 08:08:06.688060 3344 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:08:06.691637 kubelet[3344]: I0702 08:08:06.691594 3344 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:08:06.702276 kubelet[3344]: I0702 08:08:06.700524 3344 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:08:06.714502 kubelet[3344]: E0702 08:08:06.703113 3344 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:08:06.762658 kubelet[3344]: I0702 08:08:06.762605 3344 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-163" Jul 2 08:08:06.779186 kubelet[3344]: E0702 08:08:06.779141 3344 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 08:08:06.783574 kubelet[3344]: I0702 08:08:06.782488 3344 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-16-163" Jul 2 08:08:06.783867 kubelet[3344]: I0702 08:08:06.783843 3344 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-163" Jul 2 08:08:06.817942 kubelet[3344]: I0702 08:08:06.817910 3344 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:08:06.818242 kubelet[3344]: I0702 08:08:06.818204 3344 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:08:06.818397 kubelet[3344]: I0702 08:08:06.818377 3344 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:08:06.818776 kubelet[3344]: I0702 08:08:06.818749 3344 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:08:06.818917 kubelet[3344]: I0702 08:08:06.818876 3344 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 08:08:06.819009 kubelet[3344]: I0702 08:08:06.818992 3344 policy_none.go:49] "None policy: Start" Jul 2 08:08:06.820442 kubelet[3344]: I0702 08:08:06.820413 3344 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:08:06.821281 kubelet[3344]: I0702 08:08:06.820653 3344 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:08:06.821281 kubelet[3344]: I0702 08:08:06.820980 3344 state_mem.go:75] "Updated machine memory state" Jul 2 08:08:06.832516 kubelet[3344]: I0702 08:08:06.831368 3344 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:08:06.832516 kubelet[3344]: I0702 08:08:06.831634 3344 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 08:08:06.833679 kubelet[3344]: I0702 08:08:06.833648 3344 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:08:06.983828 kubelet[3344]: I0702 08:08:06.983683 3344 topology_manager.go:215] "Topology Admit Handler" 
podUID="b6f081b1d948cf76e6018954f6838a68" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-163" Jul 2 08:08:06.985081 kubelet[3344]: I0702 08:08:06.985028 3344 topology_manager.go:215] "Topology Admit Handler" podUID="ed499bba41676712701fddda4aeddd14" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-163" Jul 2 08:08:06.985325 kubelet[3344]: I0702 08:08:06.985297 3344 topology_manager.go:215] "Topology Admit Handler" podUID="b94ae89291dadbbd9f6c1c67bd07838e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-163" Jul 2 08:08:07.064662 kubelet[3344]: I0702 08:08:07.064594 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed499bba41676712701fddda4aeddd14-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-163\" (UID: \"ed499bba41676712701fddda4aeddd14\") " pod="kube-system/kube-controller-manager-ip-172-31-16-163" Jul 2 08:08:07.064942 kubelet[3344]: I0702 08:08:07.064892 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed499bba41676712701fddda4aeddd14-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-163\" (UID: \"ed499bba41676712701fddda4aeddd14\") " pod="kube-system/kube-controller-manager-ip-172-31-16-163" Jul 2 08:08:07.065275 kubelet[3344]: I0702 08:08:07.065209 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed499bba41676712701fddda4aeddd14-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-163\" (UID: \"ed499bba41676712701fddda4aeddd14\") " pod="kube-system/kube-controller-manager-ip-172-31-16-163" Jul 2 08:08:07.065493 kubelet[3344]: I0702 08:08:07.065456 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed499bba41676712701fddda4aeddd14-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-163\" (UID: \"ed499bba41676712701fddda4aeddd14\") " pod="kube-system/kube-controller-manager-ip-172-31-16-163" Jul 2 08:08:07.065635 kubelet[3344]: I0702 08:08:07.065607 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6f081b1d948cf76e6018954f6838a68-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-163\" (UID: \"b6f081b1d948cf76e6018954f6838a68\") " pod="kube-system/kube-apiserver-ip-172-31-16-163" Jul 2 08:08:07.065788 kubelet[3344]: I0702 08:08:07.065761 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed499bba41676712701fddda4aeddd14-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-163\" (UID: \"ed499bba41676712701fddda4aeddd14\") " pod="kube-system/kube-controller-manager-ip-172-31-16-163" Jul 2 08:08:07.065960 kubelet[3344]: I0702 08:08:07.065937 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b94ae89291dadbbd9f6c1c67bd07838e-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-163\" (UID: \"b94ae89291dadbbd9f6c1c67bd07838e\") " pod="kube-system/kube-scheduler-ip-172-31-16-163" Jul 2 08:08:07.066111 kubelet[3344]: I0702 08:08:07.066088 
3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6f081b1d948cf76e6018954f6838a68-ca-certs\") pod \"kube-apiserver-ip-172-31-16-163\" (UID: \"b6f081b1d948cf76e6018954f6838a68\") " pod="kube-system/kube-apiserver-ip-172-31-16-163" Jul 2 08:08:07.066286 kubelet[3344]: I0702 08:08:07.066263 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6f081b1d948cf76e6018954f6838a68-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-163\" (UID: \"b6f081b1d948cf76e6018954f6838a68\") " pod="kube-system/kube-apiserver-ip-172-31-16-163" Jul 2 08:08:07.610836 kubelet[3344]: I0702 08:08:07.610745 3344 apiserver.go:52] "Watching apiserver" Jul 2 08:08:07.662582 kubelet[3344]: I0702 08:08:07.662521 3344 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:08:07.918118 kubelet[3344]: I0702 08:08:07.917990 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-163" podStartSLOduration=1.917968824 podStartE2EDuration="1.917968824s" podCreationTimestamp="2024-07-02 08:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:08:07.864765819 +0000 UTC m=+1.392627156" watchObservedRunningTime="2024-07-02 08:08:07.917968824 +0000 UTC m=+1.445830149" Jul 2 08:08:07.939461 kubelet[3344]: I0702 08:08:07.939340 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-163" podStartSLOduration=1.9393193100000001 podStartE2EDuration="1.93931931s" podCreationTimestamp="2024-07-02 08:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:08:07.919202224 +0000 UTC m=+1.447063573" watchObservedRunningTime="2024-07-02 08:08:07.93931931 +0000 UTC m=+1.467180683" Jul 2 08:08:07.970563 kubelet[3344]: I0702 08:08:07.970467 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-163" podStartSLOduration=1.970442476 podStartE2EDuration="1.970442476s" podCreationTimestamp="2024-07-02 08:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:08:07.940950253 +0000 UTC m=+1.468811614" watchObservedRunningTime="2024-07-02 08:08:07.970442476 +0000 UTC m=+1.498303825" Jul 2 08:08:11.408009 sudo[2251]: pam_unix(sudo:session): session closed for user root Jul 2 08:08:11.432667 sshd[2247]: pam_unix(sshd:session): session closed for user core Jul 2 08:08:11.438266 systemd[1]: sshd@6-172.31.16.163:22-139.178.89.65:48460.service: Deactivated successfully. Jul 2 08:08:11.442475 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:08:11.444530 systemd[1]: session-7.scope: Consumed 11.530s CPU time, 137.2M memory peak, 0B memory swap peak. Jul 2 08:08:11.447998 systemd-logind[1911]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:08:11.450352 systemd-logind[1911]: Removed session 7. 
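The pod_startup_latency_tracker entries above report podStartE2EDuration as the gap between podCreationTimestamp and observedRunningTime. Reproducing the kube-scheduler pod's 1.917968824s figure from the logged timestamps:

    // Reproducing podStartE2EDuration for the kube-scheduler pod from the two
    // timestamps in the tracker entry above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2024-07-02 08:08:06 +0000 UTC")
        running, _ := time.Parse(layout, "2024-07-02 08:08:07.917968824 +0000 UTC")
        fmt.Println(running.Sub(created)) // 1.917968824s
    }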
Jul 2 08:08:20.928981 kubelet[3344]: I0702 08:08:20.928911 3344 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:08:20.930484 kubelet[3344]: I0702 08:08:20.930040 3344 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:08:20.930564 containerd[1920]: time="2024-07-02T08:08:20.929555228Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 08:08:21.191357 kubelet[3344]: I0702 08:08:21.190122 3344 topology_manager.go:215] "Topology Admit Handler" podUID="2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec" podNamespace="kube-system" podName="kube-proxy-66qwf" Jul 2 08:08:21.215974 systemd[1]: Created slice kubepods-besteffort-pod2e6aa55a_f7ac_4ff7_bc25_94c1d9a164ec.slice - libcontainer container kubepods-besteffort-pod2e6aa55a_f7ac_4ff7_bc25_94c1d9a164ec.slice. Jul 2 08:08:21.258380 kubelet[3344]: I0702 08:08:21.258260 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec-kube-proxy\") pod \"kube-proxy-66qwf\" (UID: \"2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec\") " pod="kube-system/kube-proxy-66qwf" Jul 2 08:08:21.258380 kubelet[3344]: I0702 08:08:21.258354 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec-lib-modules\") pod \"kube-proxy-66qwf\" (UID: \"2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec\") " pod="kube-system/kube-proxy-66qwf" Jul 2 08:08:21.258709 kubelet[3344]: I0702 08:08:21.258396 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2dd\" (UniqueName: \"kubernetes.io/projected/2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec-kube-api-access-ld2dd\") pod \"kube-proxy-66qwf\" (UID: \"2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec\") " pod="kube-system/kube-proxy-66qwf" Jul 2 08:08:21.258709 kubelet[3344]: I0702 08:08:21.258445 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec-xtables-lock\") pod \"kube-proxy-66qwf\" (UID: \"2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec\") " pod="kube-system/kube-proxy-66qwf" Jul 2 08:08:21.370443 kubelet[3344]: E0702 08:08:21.370384 3344 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 08:08:21.370443 kubelet[3344]: E0702 08:08:21.370434 3344 projected.go:200] Error preparing data for projected volume kube-api-access-ld2dd for pod kube-system/kube-proxy-66qwf: configmap "kube-root-ca.crt" not found Jul 2 08:08:21.370786 kubelet[3344]: E0702 08:08:21.370584 3344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec-kube-api-access-ld2dd podName:2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec nodeName:}" failed. No retries permitted until 2024-07-02 08:08:21.870542791 +0000 UTC m=+15.398404128 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ld2dd" (UniqueName: "kubernetes.io/projected/2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec-kube-api-access-ld2dd") pod "kube-proxy-66qwf" (UID: "2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec") : configmap "kube-root-ca.crt" not found Jul 2 08:08:21.735930 kubelet[3344]: I0702 08:08:21.734287 3344 topology_manager.go:215] "Topology Admit Handler" podUID="2336b5d9-d522-40db-b312-baf8ac447b90" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-4t7w4" Jul 2 08:08:21.755074 systemd[1]: Created slice kubepods-besteffort-pod2336b5d9_d522_40db_b312_baf8ac447b90.slice - libcontainer container kubepods-besteffort-pod2336b5d9_d522_40db_b312_baf8ac447b90.slice. Jul 2 08:08:21.764369 kubelet[3344]: I0702 08:08:21.763533 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgmb2\" (UniqueName: \"kubernetes.io/projected/2336b5d9-d522-40db-b312-baf8ac447b90-kube-api-access-fgmb2\") pod \"tigera-operator-76ff79f7fd-4t7w4\" (UID: \"2336b5d9-d522-40db-b312-baf8ac447b90\") " pod="tigera-operator/tigera-operator-76ff79f7fd-4t7w4" Jul 2 08:08:21.764369 kubelet[3344]: I0702 08:08:21.763607 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2336b5d9-d522-40db-b312-baf8ac447b90-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-4t7w4\" (UID: \"2336b5d9-d522-40db-b312-baf8ac447b90\") " pod="tigera-operator/tigera-operator-76ff79f7fd-4t7w4" Jul 2 08:08:22.065731 containerd[1920]: time="2024-07-02T08:08:22.065563352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-4t7w4,Uid:2336b5d9-d522-40db-b312-baf8ac447b90,Namespace:tigera-operator,Attempt:0,}" Jul 2 08:08:22.113734 containerd[1920]: time="2024-07-02T08:08:22.112907008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:22.113734 containerd[1920]: time="2024-07-02T08:08:22.113013166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:22.113734 containerd[1920]: time="2024-07-02T08:08:22.113057948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:22.113734 containerd[1920]: time="2024-07-02T08:08:22.113093029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:22.132283 containerd[1920]: time="2024-07-02T08:08:22.131641579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66qwf,Uid:2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:22.164602 systemd[1]: Started cri-containerd-4b895066a66a252d029258b28125ac41107b181fa1e0d5123c883c0da08104ef.scope - libcontainer container 4b895066a66a252d029258b28125ac41107b181fa1e0d5123c883c0da08104ef. Jul 2 08:08:22.199486 containerd[1920]: time="2024-07-02T08:08:22.197240203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:22.199486 containerd[1920]: time="2024-07-02T08:08:22.199367318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:22.199784 containerd[1920]: time="2024-07-02T08:08:22.199495626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:22.199981 containerd[1920]: time="2024-07-02T08:08:22.199698359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:22.242716 systemd[1]: Started cri-containerd-cb45eae429ec03740fb987dfe1f8db8d096451c9fa0a72b973397fe1d5721736.scope - libcontainer container cb45eae429ec03740fb987dfe1f8db8d096451c9fa0a72b973397fe1d5721736. Jul 2 08:08:22.265261 containerd[1920]: time="2024-07-02T08:08:22.263882604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-4t7w4,Uid:2336b5d9-d522-40db-b312-baf8ac447b90,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4b895066a66a252d029258b28125ac41107b181fa1e0d5123c883c0da08104ef\"" Jul 2 08:08:22.272194 containerd[1920]: time="2024-07-02T08:08:22.271148239Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 08:08:22.316580 containerd[1920]: time="2024-07-02T08:08:22.316339555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66qwf,Uid:2e6aa55a-f7ac-4ff7-bc25-94c1d9a164ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb45eae429ec03740fb987dfe1f8db8d096451c9fa0a72b973397fe1d5721736\"" Jul 2 08:08:22.324311 containerd[1920]: time="2024-07-02T08:08:22.324023720Z" level=info msg="CreateContainer within sandbox \"cb45eae429ec03740fb987dfe1f8db8d096451c9fa0a72b973397fe1d5721736\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:08:22.354420 containerd[1920]: time="2024-07-02T08:08:22.354311928Z" level=info msg="CreateContainer within sandbox \"cb45eae429ec03740fb987dfe1f8db8d096451c9fa0a72b973397fe1d5721736\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f2f9d994d1e609767ef590db57f2a3f4a3c2134edcb249f7727879f0efaaf950\"" Jul 2 08:08:22.357185 containerd[1920]: time="2024-07-02T08:08:22.356918635Z" level=info msg="StartContainer for \"f2f9d994d1e609767ef590db57f2a3f4a3c2134edcb249f7727879f0efaaf950\"" Jul 2 08:08:22.418500 systemd[1]: Started cri-containerd-f2f9d994d1e609767ef590db57f2a3f4a3c2134edcb249f7727879f0efaaf950.scope - libcontainer container f2f9d994d1e609767ef590db57f2a3f4a3c2134edcb249f7727879f0efaaf950. Jul 2 08:08:22.481170 containerd[1920]: time="2024-07-02T08:08:22.481062465Z" level=info msg="StartContainer for \"f2f9d994d1e609767ef590db57f2a3f4a3c2134edcb249f7727879f0efaaf950\" returns successfully" Jul 2 08:08:22.892693 systemd[1]: run-containerd-runc-k8s.io-4b895066a66a252d029258b28125ac41107b181fa1e0d5123c883c0da08104ef-runc.g1Y06y.mount: Deactivated successfully. Jul 2 08:08:23.775738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886611765.mount: Deactivated successfully. 
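The MountVolume.SetUp failure a few entries above is transient: the projected kube-api-access volume bundles the service-account token with the kube-root-ca.crt ConfigMap, and that ConfigMap did not yet exist in the namespace when the kubelet first tried to mount it. The reconciler simply schedules another attempt after the logged durationBeforeRetry of 500ms, and the RunPodSandbox / CreateContainer / StartContainer sequence for kube-proxy-66qwf that follows shows the retry went through. A minimal sketch of that retry-with-backoff pattern, assuming an initial 500ms delay that doubles on each failure (illustrative constants, not the kubelet's exact ones):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountOnce stands in for a single volume mount attempt; it fails until the
// ConfigMap "exists", the way the kube-api-access mount does in the log above.
func mountOnce(attempt int) error {
	if attempt < 2 {
		return errors.New(`configmap "kube-root-ca.crt" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // assumed initial durationBeforeRetry
	for attempt := 0; ; attempt++ {
		err := mountOnce(attempt)
		if err == nil {
			fmt.Printf("mounted after %d attempt(s)\n", attempt+1)
			return
		}
		fmt.Printf("attempt %d failed: %v (retrying in %s)\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // assumed doubling; a real backoff would also be capped
	}
}
```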
Jul 2 08:08:24.493385 containerd[1920]: time="2024-07-02T08:08:24.493312501Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:24.495903 containerd[1920]: time="2024-07-02T08:08:24.495802966Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473586" Jul 2 08:08:24.497935 containerd[1920]: time="2024-07-02T08:08:24.497791892Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:24.503704 containerd[1920]: time="2024-07-02T08:08:24.503572060Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:24.505882 containerd[1920]: time="2024-07-02T08:08:24.505584014Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.234365023s" Jul 2 08:08:24.505882 containerd[1920]: time="2024-07-02T08:08:24.505661957Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 08:08:24.511299 containerd[1920]: time="2024-07-02T08:08:24.511120389Z" level=info msg="CreateContainer within sandbox \"4b895066a66a252d029258b28125ac41107b181fa1e0d5123c883c0da08104ef\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 08:08:24.542336 containerd[1920]: time="2024-07-02T08:08:24.542203767Z" level=info msg="CreateContainer within sandbox \"4b895066a66a252d029258b28125ac41107b181fa1e0d5123c883c0da08104ef\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018\"" Jul 2 08:08:24.543690 containerd[1920]: time="2024-07-02T08:08:24.543601013Z" level=info msg="StartContainer for \"9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018\"" Jul 2 08:08:24.612621 systemd[1]: Started cri-containerd-9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018.scope - libcontainer container 9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018. 
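Two timing figures in this stretch line up: containerd reports the quay.io/tigera/operator:v1.34.0 pull completing in 2.234365023s (compressed size 19467821 bytes, pinned by the sha256 repo digest), and the kubelet's pod_startup_latency_tracker entry for the operator pod further down records firstStartedPulling / lastFinishedPulling timestamps roughly 2.24s apart. That same window is why podStartSLOduration (7.453574325s) is smaller than podStartE2EDuration (9.692727438s) for that pod: the SLO figure excludes time spent pulling images. A small stand-alone check of the arithmetic, using the timestamps copied from that tracker entry (the trailing m=+... monotonic-clock suffix is stripped before parsing):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Timestamps copied from the pod_startup_latency_tracker entry for
// tigera-operator-76ff79f7fd-4t7w4 further down in this log.
const (
	firstStartedPulling = "2024-07-02 08:08:22.26867032 +0000 UTC m=+15.796531669"
	lastFinishedPulling = "2024-07-02 08:08:24.507823457 +0000 UTC m=+18.035684782"
	layout              = "2006-01-02 15:04:05.999999999 -0700 MST"
)

func parse(s string) time.Time {
	// Drop the " m=+..." monotonic-clock annotation that Go's time.String() appends.
	t, err := time.Parse(layout, strings.SplitN(s, " m=", 2)[0])
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	pull := parse(lastFinishedPulling).Sub(parse(firstStartedPulling))
	// Prints ~2.239s, in line with containerd's "in 2.234365023s" above and with
	// podStartE2EDuration (9.692727438s) minus podStartSLOduration (7.453574325s).
	fmt.Println("image pull window:", pull)
}
```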
Jul 2 08:08:24.664604 containerd[1920]: time="2024-07-02T08:08:24.664542566Z" level=info msg="StartContainer for \"9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018\" returns successfully" Jul 2 08:08:24.828853 kubelet[3344]: I0702 08:08:24.828539 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-66qwf" podStartSLOduration=3.8284753609999997 podStartE2EDuration="3.828475361s" podCreationTimestamp="2024-07-02 08:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:08:22.816374751 +0000 UTC m=+16.344236076" watchObservedRunningTime="2024-07-02 08:08:24.828475361 +0000 UTC m=+18.356336698" Jul 2 08:08:30.692849 kubelet[3344]: I0702 08:08:30.692750 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-4t7w4" podStartSLOduration=7.453574325 podStartE2EDuration="9.692727438s" podCreationTimestamp="2024-07-02 08:08:21 +0000 UTC" firstStartedPulling="2024-07-02 08:08:22.26867032 +0000 UTC m=+15.796531669" lastFinishedPulling="2024-07-02 08:08:24.507823457 +0000 UTC m=+18.035684782" observedRunningTime="2024-07-02 08:08:24.831458564 +0000 UTC m=+18.359319925" watchObservedRunningTime="2024-07-02 08:08:30.692727438 +0000 UTC m=+24.220588776" Jul 2 08:08:30.694009 kubelet[3344]: I0702 08:08:30.692968 3344 topology_manager.go:215] "Topology Admit Handler" podUID="d74a9fcb-547a-42fe-9f3c-2caf90fb3b94" podNamespace="calico-system" podName="calico-typha-749b68d9f5-v9674" Jul 2 08:08:30.717877 systemd[1]: Created slice kubepods-besteffort-podd74a9fcb_547a_42fe_9f3c_2caf90fb3b94.slice - libcontainer container kubepods-besteffort-podd74a9fcb_547a_42fe_9f3c_2caf90fb3b94.slice. 
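Two families of errors dominate the rest of this section, and both are benign at this stage. First, the reflector warnings that follow ("failed to list *v1.ConfigMap ... no relationship found between node 'ip-172-31-16-163' and this object") come from the node authorizer: a kubelet may only read Secrets and ConfigMaps referenced by pods already bound to it, so the tigera-ca-bundle, typha-certs and kube-root-ca.crt list/watch calls are rejected until the calico-typha binding is reflected in the authorizer's graph, after which they stop. Second, the long run of FlexVolume messages is the kubelet's periodic plugin probe finding Calico's nodeagent~uds driver directory before the uds binary has been installed into it (presumably done later by calico-node's flexvol-driver init container, given the flexvol-driver-host host path mounted above); each probe produces no output, and decoding an empty reply as JSON yields exactly the logged "unexpected end of JSON input". A minimal reproduction of that second error, using an illustrative struct rather than the kubelet's actual driver-status type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus only loosely mirrors a FlexVolume driver reply; the field is
// illustrative, not the kubelet's real type.
type driverStatus struct {
	Status string `json:"status"`
}

func main() {
	var st driverStatus
	// The probed driver executable is missing, so the call produces no output;
	// this empty string is what ends up being decoded.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input
}
```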
Jul 2 08:08:30.725626 kubelet[3344]: I0702 08:08:30.725474 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ks9\" (UniqueName: \"kubernetes.io/projected/d74a9fcb-547a-42fe-9f3c-2caf90fb3b94-kube-api-access-n6ks9\") pod \"calico-typha-749b68d9f5-v9674\" (UID: \"d74a9fcb-547a-42fe-9f3c-2caf90fb3b94\") " pod="calico-system/calico-typha-749b68d9f5-v9674" Jul 2 08:08:30.727313 kubelet[3344]: I0702 08:08:30.727215 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d74a9fcb-547a-42fe-9f3c-2caf90fb3b94-tigera-ca-bundle\") pod \"calico-typha-749b68d9f5-v9674\" (UID: \"d74a9fcb-547a-42fe-9f3c-2caf90fb3b94\") " pod="calico-system/calico-typha-749b68d9f5-v9674" Jul 2 08:08:30.727486 kubelet[3344]: I0702 08:08:30.727349 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d74a9fcb-547a-42fe-9f3c-2caf90fb3b94-typha-certs\") pod \"calico-typha-749b68d9f5-v9674\" (UID: \"d74a9fcb-547a-42fe-9f3c-2caf90fb3b94\") " pod="calico-system/calico-typha-749b68d9f5-v9674" Jul 2 08:08:30.731187 kubelet[3344]: W0702 08:08:30.731071 3344 reflector.go:547] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-16-163" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-16-163' and this object Jul 2 08:08:30.731187 kubelet[3344]: W0702 08:08:30.731092 3344 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-16-163" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-16-163' and this object Jul 2 08:08:30.731738 kubelet[3344]: E0702 08:08:30.731286 3344 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-16-163" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-16-163' and this object Jul 2 08:08:30.731738 kubelet[3344]: E0702 08:08:30.731216 3344 reflector.go:150] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-16-163" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-16-163' and this object Jul 2 08:08:30.735549 kubelet[3344]: W0702 08:08:30.735463 3344 reflector.go:547] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-163" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-16-163' and this object Jul 2 08:08:30.735549 kubelet[3344]: E0702 08:08:30.735551 3344 reflector.go:150] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-163" cannot list resource "configmaps" in API group "" in the namespace "calico-system": 
no relationship found between node 'ip-172-31-16-163' and this object Jul 2 08:08:30.967665 kubelet[3344]: I0702 08:08:30.967483 3344 topology_manager.go:215] "Topology Admit Handler" podUID="a55bc379-3415-4676-96e8-a4ba7ca4606c" podNamespace="calico-system" podName="calico-node-hsr9m" Jul 2 08:08:30.988757 systemd[1]: Created slice kubepods-besteffort-poda55bc379_3415_4676_96e8_a4ba7ca4606c.slice - libcontainer container kubepods-besteffort-poda55bc379_3415_4676_96e8_a4ba7ca4606c.slice. Jul 2 08:08:31.029005 kubelet[3344]: I0702 08:08:31.028948 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc379-3415-4676-96e8-a4ba7ca4606c-cni-bin-dir\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029171 kubelet[3344]: I0702 08:08:31.029022 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a55bc379-3415-4676-96e8-a4ba7ca4606c-policysync\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029171 kubelet[3344]: I0702 08:08:31.029067 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a55bc379-3415-4676-96e8-a4ba7ca4606c-node-certs\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029171 kubelet[3344]: I0702 08:08:31.029121 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a55bc379-3415-4676-96e8-a4ba7ca4606c-flexvol-driver-host\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029171 kubelet[3344]: I0702 08:08:31.029161 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmmcm\" (UniqueName: \"kubernetes.io/projected/a55bc379-3415-4676-96e8-a4ba7ca4606c-kube-api-access-dmmcm\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029475 kubelet[3344]: I0702 08:08:31.029206 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a55bc379-3415-4676-96e8-a4ba7ca4606c-xtables-lock\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029475 kubelet[3344]: I0702 08:08:31.029275 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a55bc379-3415-4676-96e8-a4ba7ca4606c-tigera-ca-bundle\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029475 kubelet[3344]: I0702 08:08:31.029317 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a55bc379-3415-4676-96e8-a4ba7ca4606c-lib-modules\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " 
pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029475 kubelet[3344]: I0702 08:08:31.029353 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc379-3415-4676-96e8-a4ba7ca4606c-cni-net-dir\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029475 kubelet[3344]: I0702 08:08:31.029399 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc379-3415-4676-96e8-a4ba7ca4606c-cni-log-dir\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029739 kubelet[3344]: I0702 08:08:31.029437 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a55bc379-3415-4676-96e8-a4ba7ca4606c-var-run-calico\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.029739 kubelet[3344]: I0702 08:08:31.029470 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a55bc379-3415-4676-96e8-a4ba7ca4606c-var-lib-calico\") pod \"calico-node-hsr9m\" (UID: \"a55bc379-3415-4676-96e8-a4ba7ca4606c\") " pod="calico-system/calico-node-hsr9m" Jul 2 08:08:31.087669 kubelet[3344]: I0702 08:08:31.087581 3344 topology_manager.go:215] "Topology Admit Handler" podUID="6f6df60c-e3f9-43ed-b240-8e9935f2d2eb" podNamespace="calico-system" podName="csi-node-driver-x68xj" Jul 2 08:08:31.088578 kubelet[3344]: E0702 08:08:31.088115 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x68xj" podUID="6f6df60c-e3f9-43ed-b240-8e9935f2d2eb" Jul 2 08:08:31.132271 kubelet[3344]: I0702 08:08:31.130000 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6f6df60c-e3f9-43ed-b240-8e9935f2d2eb-socket-dir\") pod \"csi-node-driver-x68xj\" (UID: \"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb\") " pod="calico-system/csi-node-driver-x68xj" Jul 2 08:08:31.132271 kubelet[3344]: I0702 08:08:31.130115 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6f6df60c-e3f9-43ed-b240-8e9935f2d2eb-varrun\") pod \"csi-node-driver-x68xj\" (UID: \"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb\") " pod="calico-system/csi-node-driver-x68xj" Jul 2 08:08:31.132271 kubelet[3344]: I0702 08:08:31.130169 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6f6df60c-e3f9-43ed-b240-8e9935f2d2eb-registration-dir\") pod \"csi-node-driver-x68xj\" (UID: \"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb\") " pod="calico-system/csi-node-driver-x68xj" Jul 2 08:08:31.132271 kubelet[3344]: I0702 08:08:31.130324 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf4wd\" (UniqueName: 
\"kubernetes.io/projected/6f6df60c-e3f9-43ed-b240-8e9935f2d2eb-kube-api-access-rf4wd\") pod \"csi-node-driver-x68xj\" (UID: \"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb\") " pod="calico-system/csi-node-driver-x68xj" Jul 2 08:08:31.132271 kubelet[3344]: I0702 08:08:31.130462 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f6df60c-e3f9-43ed-b240-8e9935f2d2eb-kubelet-dir\") pod \"csi-node-driver-x68xj\" (UID: \"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb\") " pod="calico-system/csi-node-driver-x68xj" Jul 2 08:08:31.136396 kubelet[3344]: E0702 08:08:31.136326 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.136762 kubelet[3344]: W0702 08:08:31.136706 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.136762 kubelet[3344]: E0702 08:08:31.136768 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.138410 kubelet[3344]: E0702 08:08:31.138360 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.138410 kubelet[3344]: W0702 08:08:31.138400 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.139130 kubelet[3344]: E0702 08:08:31.138444 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.139130 kubelet[3344]: E0702 08:08:31.138928 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.139130 kubelet[3344]: W0702 08:08:31.138955 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.139130 kubelet[3344]: E0702 08:08:31.139037 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.141495 kubelet[3344]: E0702 08:08:31.141430 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.141495 kubelet[3344]: W0702 08:08:31.141474 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.141495 kubelet[3344]: E0702 08:08:31.141548 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.142392 kubelet[3344]: E0702 08:08:31.141899 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.142392 kubelet[3344]: W0702 08:08:31.141919 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.142392 kubelet[3344]: E0702 08:08:31.142175 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.144577 kubelet[3344]: E0702 08:08:31.144523 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.144577 kubelet[3344]: W0702 08:08:31.144564 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.144883 kubelet[3344]: E0702 08:08:31.144641 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.146261 kubelet[3344]: E0702 08:08:31.145008 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.146261 kubelet[3344]: W0702 08:08:31.145044 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.146261 kubelet[3344]: E0702 08:08:31.146169 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.147419 kubelet[3344]: E0702 08:08:31.147361 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.147419 kubelet[3344]: W0702 08:08:31.147405 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.147929 kubelet[3344]: E0702 08:08:31.147496 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.153364 kubelet[3344]: E0702 08:08:31.153062 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.153364 kubelet[3344]: W0702 08:08:31.153099 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.161325 kubelet[3344]: E0702 08:08:31.160963 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.161325 kubelet[3344]: W0702 08:08:31.160998 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.161734 kubelet[3344]: E0702 08:08:31.161704 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.165307 kubelet[3344]: E0702 08:08:31.163606 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.166534 kubelet[3344]: E0702 08:08:31.163623 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.166965 kubelet[3344]: W0702 08:08:31.166427 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.167172 kubelet[3344]: E0702 08:08:31.167138 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.171520 kubelet[3344]: E0702 08:08:31.171182 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.171520 kubelet[3344]: W0702 08:08:31.171357 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.172358 kubelet[3344]: E0702 08:08:31.172289 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.173805 kubelet[3344]: E0702 08:08:31.173544 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.173805 kubelet[3344]: W0702 08:08:31.173581 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.174127 kubelet[3344]: E0702 08:08:31.174081 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.174487 kubelet[3344]: E0702 08:08:31.174410 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.174811 kubelet[3344]: W0702 08:08:31.174461 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.174811 kubelet[3344]: E0702 08:08:31.174731 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.175796 kubelet[3344]: E0702 08:08:31.175651 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.175796 kubelet[3344]: W0702 08:08:31.175679 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.176488 kubelet[3344]: E0702 08:08:31.176283 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.176488 kubelet[3344]: E0702 08:08:31.176435 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.176488 kubelet[3344]: W0702 08:08:31.176455 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.177047 kubelet[3344]: E0702 08:08:31.176842 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.177366 kubelet[3344]: E0702 08:08:31.177335 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.177632 kubelet[3344]: W0702 08:08:31.177501 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.177632 kubelet[3344]: E0702 08:08:31.177591 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.178616 kubelet[3344]: E0702 08:08:31.178583 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.179347 kubelet[3344]: W0702 08:08:31.178855 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.179685 kubelet[3344]: E0702 08:08:31.179536 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.179966 kubelet[3344]: E0702 08:08:31.179938 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.180365 kubelet[3344]: W0702 08:08:31.180079 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.180365 kubelet[3344]: E0702 08:08:31.180170 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.180808 kubelet[3344]: E0702 08:08:31.180688 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.180808 kubelet[3344]: W0702 08:08:31.180714 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.180808 kubelet[3344]: E0702 08:08:31.180770 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.181472 kubelet[3344]: E0702 08:08:31.181346 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.181472 kubelet[3344]: W0702 08:08:31.181373 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.181472 kubelet[3344]: E0702 08:08:31.181415 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.182530 kubelet[3344]: E0702 08:08:31.182491 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.182530 kubelet[3344]: W0702 08:08:31.182528 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.182726 kubelet[3344]: E0702 08:08:31.182573 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.184625 kubelet[3344]: E0702 08:08:31.184586 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.185102 kubelet[3344]: W0702 08:08:31.184997 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.185102 kubelet[3344]: E0702 08:08:31.185045 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.233562 kubelet[3344]: E0702 08:08:31.233411 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.233562 kubelet[3344]: W0702 08:08:31.233455 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.233562 kubelet[3344]: E0702 08:08:31.233493 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.236670 kubelet[3344]: E0702 08:08:31.236601 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.236670 kubelet[3344]: W0702 08:08:31.236658 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.237134 kubelet[3344]: E0702 08:08:31.236693 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.238825 kubelet[3344]: E0702 08:08:31.238756 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.238825 kubelet[3344]: W0702 08:08:31.238813 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.239092 kubelet[3344]: E0702 08:08:31.238865 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.240104 kubelet[3344]: E0702 08:08:31.240049 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.240104 kubelet[3344]: W0702 08:08:31.240089 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.241502 kubelet[3344]: E0702 08:08:31.240158 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.242114 kubelet[3344]: E0702 08:08:31.241918 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.242114 kubelet[3344]: W0702 08:08:31.241952 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.242114 kubelet[3344]: E0702 08:08:31.242006 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.242899 kubelet[3344]: E0702 08:08:31.242648 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.242899 kubelet[3344]: W0702 08:08:31.242676 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.243316 kubelet[3344]: E0702 08:08:31.243184 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.243623 kubelet[3344]: E0702 08:08:31.243594 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.243847 kubelet[3344]: W0702 08:08:31.243724 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.245342 kubelet[3344]: E0702 08:08:31.244899 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.245342 kubelet[3344]: E0702 08:08:31.245090 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.245342 kubelet[3344]: W0702 08:08:31.245113 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.245704 kubelet[3344]: E0702 08:08:31.245673 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.245960 kubelet[3344]: E0702 08:08:31.245935 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.246207 kubelet[3344]: W0702 08:08:31.246074 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.246691 kubelet[3344]: E0702 08:08:31.246629 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.246821 kubelet[3344]: E0702 08:08:31.246634 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.246821 kubelet[3344]: W0702 08:08:31.246740 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.247193 kubelet[3344]: E0702 08:08:31.246874 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.249073 kubelet[3344]: E0702 08:08:31.249018 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.249073 kubelet[3344]: W0702 08:08:31.249060 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.249510 kubelet[3344]: E0702 08:08:31.249213 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.250453 kubelet[3344]: E0702 08:08:31.250401 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.250453 kubelet[3344]: W0702 08:08:31.250441 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.250800 kubelet[3344]: E0702 08:08:31.250626 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.250963 kubelet[3344]: E0702 08:08:31.250923 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.250963 kubelet[3344]: W0702 08:08:31.250951 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.251188 kubelet[3344]: E0702 08:08:31.251067 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.252459 kubelet[3344]: E0702 08:08:31.252405 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.252459 kubelet[3344]: W0702 08:08:31.252446 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.252799 kubelet[3344]: E0702 08:08:31.252594 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.253886 kubelet[3344]: E0702 08:08:31.253833 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.253886 kubelet[3344]: W0702 08:08:31.253875 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.254338 kubelet[3344]: E0702 08:08:31.253941 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.256441 kubelet[3344]: E0702 08:08:31.256386 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.256441 kubelet[3344]: W0702 08:08:31.256427 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.256765 kubelet[3344]: E0702 08:08:31.256582 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.257890 kubelet[3344]: E0702 08:08:31.257837 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.257890 kubelet[3344]: W0702 08:08:31.257878 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.257890 kubelet[3344]: E0702 08:08:31.257951 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.259687 kubelet[3344]: E0702 08:08:31.259371 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.259687 kubelet[3344]: W0702 08:08:31.259411 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.259687 kubelet[3344]: E0702 08:08:31.259453 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.261416 kubelet[3344]: E0702 08:08:31.261364 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.261416 kubelet[3344]: W0702 08:08:31.261405 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.261772 kubelet[3344]: E0702 08:08:31.261559 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.263446 kubelet[3344]: E0702 08:08:31.263382 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.263446 kubelet[3344]: W0702 08:08:31.263441 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.263775 kubelet[3344]: E0702 08:08:31.263596 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.265447 kubelet[3344]: E0702 08:08:31.265391 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.265447 kubelet[3344]: W0702 08:08:31.265434 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.265841 kubelet[3344]: E0702 08:08:31.265622 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.266911 kubelet[3344]: E0702 08:08:31.266857 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.266911 kubelet[3344]: W0702 08:08:31.266900 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.267320 kubelet[3344]: E0702 08:08:31.267048 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.269468 kubelet[3344]: E0702 08:08:31.269411 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.269468 kubelet[3344]: W0702 08:08:31.269454 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.270624 kubelet[3344]: E0702 08:08:31.269520 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.271389 kubelet[3344]: E0702 08:08:31.271057 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.271389 kubelet[3344]: W0702 08:08:31.271094 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.271389 kubelet[3344]: E0702 08:08:31.271147 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.273478 kubelet[3344]: E0702 08:08:31.273390 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.273478 kubelet[3344]: W0702 08:08:31.273432 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.274410 kubelet[3344]: E0702 08:08:31.274153 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.275265 kubelet[3344]: E0702 08:08:31.274743 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.275265 kubelet[3344]: W0702 08:08:31.274772 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.275265 kubelet[3344]: E0702 08:08:31.274835 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.279268 kubelet[3344]: E0702 08:08:31.277419 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.279268 kubelet[3344]: W0702 08:08:31.277458 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.279268 kubelet[3344]: E0702 08:08:31.277493 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.281534 kubelet[3344]: E0702 08:08:31.281207 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.281534 kubelet[3344]: W0702 08:08:31.281269 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.281534 kubelet[3344]: E0702 08:08:31.281303 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.285438 kubelet[3344]: E0702 08:08:31.285115 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.285438 kubelet[3344]: W0702 08:08:31.285150 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.285901 kubelet[3344]: E0702 08:08:31.285868 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.286245 kubelet[3344]: W0702 08:08:31.286190 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.286695 kubelet[3344]: E0702 08:08:31.286653 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.287424 kubelet[3344]: E0702 08:08:31.286131 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.370932 kubelet[3344]: E0702 08:08:31.370891 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.371327 kubelet[3344]: W0702 08:08:31.371100 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.371327 kubelet[3344]: E0702 08:08:31.371147 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.372173 kubelet[3344]: E0702 08:08:31.371966 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.372173 kubelet[3344]: W0702 08:08:31.372013 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.372173 kubelet[3344]: E0702 08:08:31.372049 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.374620 kubelet[3344]: E0702 08:08:31.374098 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.374620 kubelet[3344]: W0702 08:08:31.374134 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.374620 kubelet[3344]: E0702 08:08:31.374167 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.375311 kubelet[3344]: E0702 08:08:31.375258 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.375311 kubelet[3344]: W0702 08:08:31.375301 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.375539 kubelet[3344]: E0702 08:08:31.375339 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.376166 kubelet[3344]: E0702 08:08:31.375932 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.376166 kubelet[3344]: W0702 08:08:31.375965 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.376166 kubelet[3344]: E0702 08:08:31.375995 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.376734 kubelet[3344]: E0702 08:08:31.376688 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.376734 kubelet[3344]: W0702 08:08:31.376726 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.376912 kubelet[3344]: E0702 08:08:31.376771 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.478755 kubelet[3344]: E0702 08:08:31.478597 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.479282 kubelet[3344]: W0702 08:08:31.478993 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.479282 kubelet[3344]: E0702 08:08:31.479045 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.480266 kubelet[3344]: E0702 08:08:31.480210 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.481061 kubelet[3344]: W0702 08:08:31.480484 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.481061 kubelet[3344]: E0702 08:08:31.480538 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.481658 kubelet[3344]: E0702 08:08:31.481620 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.481763 kubelet[3344]: W0702 08:08:31.481657 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.481763 kubelet[3344]: E0702 08:08:31.481692 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.482726 kubelet[3344]: E0702 08:08:31.482510 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.482726 kubelet[3344]: W0702 08:08:31.482570 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.482726 kubelet[3344]: E0702 08:08:31.482619 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.483415 kubelet[3344]: E0702 08:08:31.483167 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.483415 kubelet[3344]: W0702 08:08:31.483188 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.483415 kubelet[3344]: E0702 08:08:31.483213 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.484295 kubelet[3344]: E0702 08:08:31.483614 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.484295 kubelet[3344]: W0702 08:08:31.483650 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.484295 kubelet[3344]: E0702 08:08:31.483672 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.585622 kubelet[3344]: E0702 08:08:31.585574 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.585622 kubelet[3344]: W0702 08:08:31.585612 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.585821 kubelet[3344]: E0702 08:08:31.585647 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.586575 kubelet[3344]: E0702 08:08:31.586533 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.586575 kubelet[3344]: W0702 08:08:31.586569 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.587093 kubelet[3344]: E0702 08:08:31.586603 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.587210 kubelet[3344]: E0702 08:08:31.587125 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.587210 kubelet[3344]: W0702 08:08:31.587149 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.587210 kubelet[3344]: E0702 08:08:31.587176 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.587784 kubelet[3344]: E0702 08:08:31.587595 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.587784 kubelet[3344]: W0702 08:08:31.587625 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.587784 kubelet[3344]: E0702 08:08:31.587652 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.588897 kubelet[3344]: E0702 08:08:31.588021 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.588897 kubelet[3344]: W0702 08:08:31.588045 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.588897 kubelet[3344]: E0702 08:08:31.588072 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.588897 kubelet[3344]: E0702 08:08:31.588521 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.588897 kubelet[3344]: W0702 08:08:31.588549 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.588897 kubelet[3344]: E0702 08:08:31.588579 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.689643 kubelet[3344]: E0702 08:08:31.689589 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.689643 kubelet[3344]: W0702 08:08:31.689629 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.689989 kubelet[3344]: E0702 08:08:31.689662 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.690515 kubelet[3344]: E0702 08:08:31.690486 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.690585 kubelet[3344]: W0702 08:08:31.690515 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.690585 kubelet[3344]: E0702 08:08:31.690540 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.691748 kubelet[3344]: E0702 08:08:31.691697 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.691748 kubelet[3344]: W0702 08:08:31.691728 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.691887 kubelet[3344]: E0702 08:08:31.691754 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.692608 kubelet[3344]: E0702 08:08:31.692573 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.693013 kubelet[3344]: W0702 08:08:31.692739 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.693013 kubelet[3344]: E0702 08:08:31.692781 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.694788 kubelet[3344]: E0702 08:08:31.694044 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.694788 kubelet[3344]: W0702 08:08:31.694079 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.694788 kubelet[3344]: E0702 08:08:31.694113 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.694788 kubelet[3344]: E0702 08:08:31.694660 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.694788 kubelet[3344]: W0702 08:08:31.694685 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.694788 kubelet[3344]: E0702 08:08:31.694719 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.742511 kubelet[3344]: E0702 08:08:31.741592 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.742511 kubelet[3344]: W0702 08:08:31.741633 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.742511 kubelet[3344]: E0702 08:08:31.741684 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.747381 kubelet[3344]: E0702 08:08:31.743789 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.747381 kubelet[3344]: W0702 08:08:31.743829 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.747381 kubelet[3344]: E0702 08:08:31.743864 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.759332 kubelet[3344]: E0702 08:08:31.759277 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.761852 kubelet[3344]: W0702 08:08:31.761595 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.761852 kubelet[3344]: E0702 08:08:31.761760 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.795733 kubelet[3344]: E0702 08:08:31.795693 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.795920 kubelet[3344]: W0702 08:08:31.795890 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.796088 kubelet[3344]: E0702 08:08:31.796060 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.797569 kubelet[3344]: E0702 08:08:31.797527 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.797756 kubelet[3344]: W0702 08:08:31.797726 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.797891 kubelet[3344]: E0702 08:08:31.797857 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.798863 kubelet[3344]: E0702 08:08:31.798718 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.799092 kubelet[3344]: W0702 08:08:31.799038 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.799316 kubelet[3344]: E0702 08:08:31.799285 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
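Error: unexpected end of JSON input"

The repeating driver-call.go / plugins.go triples above (and the near-identical runs that follow) all stem from one condition: the kubelet's FlexVolume prober keeps invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the binary is not yet present on this node ("executable file not found in $PATH"), so the call yields no output and decoding the empty reply fails with "unexpected end of JSON input". The Go sketch below is illustrative only; the DriverStatus shape is an assumption modelled on the conventional FlexVolume driver reply rather than the kubelet's own type, and the snippet merely shows why an empty reply reproduces exactly this decode error.

package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus approximates the JSON a FlexVolume driver is expected to print
// for "init" (assumed shape, for illustration only).
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st DriverStatus

	// A well-formed init reply decodes cleanly.
	good := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	fmt.Println(json.Unmarshal(good, &st), st.Status) // <nil> Success

	// A missing driver binary produces no output at all; decoding the empty
	// reply is what surfaces in the log as "unexpected end of JSON input".
	fmt.Println(json.Unmarshal([]byte(""), &st))
}

The nodeagent~uds driver is normally installed into that directory by Calico's flexvol-driver init container (the pod2daemon-flexvol image pulled further down in this log), which is presumably what eventually clears these probe failures.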
Jul 2 08:08:31.829363 kubelet[3344]: E0702 08:08:31.829295 3344 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 08:08:31.829736 kubelet[3344]: E0702 08:08:31.829707 3344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d74a9fcb-547a-42fe-9f3c-2caf90fb3b94-typha-certs podName:d74a9fcb-547a-42fe-9f3c-2caf90fb3b94 nodeName:}" failed. No retries permitted until 2024-07-02 08:08:32.329676245 +0000 UTC m=+25.857537570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/d74a9fcb-547a-42fe-9f3c-2caf90fb3b94-typha-certs") pod "calico-typha-749b68d9f5-v9674" (UID: "d74a9fcb-547a-42fe-9f3c-2caf90fb3b94") : failed to sync secret cache: timed out waiting for the condition Jul 2 08:08:31.829916 kubelet[3344]: E0702 08:08:31.829590 3344 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 2 08:08:31.830081 kubelet[3344]: E0702 08:08:31.830058 3344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d74a9fcb-547a-42fe-9f3c-2caf90fb3b94-tigera-ca-bundle podName:d74a9fcb-547a-42fe-9f3c-2caf90fb3b94 nodeName:}" failed. No retries permitted until 2024-07-02 08:08:32.33003484 +0000 UTC m=+25.857896177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/d74a9fcb-547a-42fe-9f3c-2caf90fb3b94-tigera-ca-bundle") pod "calico-typha-749b68d9f5-v9674" (UID: "d74a9fcb-547a-42fe-9f3c-2caf90fb3b94") : failed to sync configmap cache: timed out waiting for the condition Jul 2 08:08:31.900647 kubelet[3344]: E0702 08:08:31.900584 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.900647 kubelet[3344]: W0702 08:08:31.900635 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.901113 kubelet[3344]: E0702 08:08:31.900671 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:31.901113 kubelet[3344]: E0702 08:08:31.901085 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.901113 kubelet[3344]: W0702 08:08:31.901105 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.901477 kubelet[3344]: E0702 08:08:31.901128 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:31.902192 kubelet[3344]: E0702 08:08:31.902147 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:31.902192 kubelet[3344]: W0702 08:08:31.902184 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:31.902363 kubelet[3344]: E0702 08:08:31.902218 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.003534 kubelet[3344]: E0702 08:08:32.003364 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.003534 kubelet[3344]: W0702 08:08:32.003412 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.003534 kubelet[3344]: E0702 08:08:32.003449 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.005368 kubelet[3344]: E0702 08:08:32.005303 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.005368 kubelet[3344]: W0702 08:08:32.005346 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.005576 kubelet[3344]: E0702 08:08:32.005383 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.007135 kubelet[3344]: E0702 08:08:32.007079 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.007135 kubelet[3344]: W0702 08:08:32.007121 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.009489 kubelet[3344]: E0702 08:08:32.007173 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.109168 kubelet[3344]: E0702 08:08:32.108827 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.109168 kubelet[3344]: W0702 08:08:32.108865 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.109168 kubelet[3344]: E0702 08:08:32.108899 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:32.109709 kubelet[3344]: E0702 08:08:32.109673 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.110127 kubelet[3344]: W0702 08:08:32.109852 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.110127 kubelet[3344]: E0702 08:08:32.109896 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.111256 kubelet[3344]: E0702 08:08:32.110945 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.111256 kubelet[3344]: W0702 08:08:32.110984 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.111256 kubelet[3344]: E0702 08:08:32.111021 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.134399 kubelet[3344]: E0702 08:08:32.134340 3344 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 2 08:08:32.134647 kubelet[3344]: E0702 08:08:32.134458 3344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a55bc379-3415-4676-96e8-a4ba7ca4606c-tigera-ca-bundle podName:a55bc379-3415-4676-96e8-a4ba7ca4606c nodeName:}" failed. No retries permitted until 2024-07-02 08:08:32.634427846 +0000 UTC m=+26.162289183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/a55bc379-3415-4676-96e8-a4ba7ca4606c-tigera-ca-bundle") pod "calico-node-hsr9m" (UID: "a55bc379-3415-4676-96e8-a4ba7ca4606c") : failed to sync configmap cache: timed out waiting for the condition Jul 2 08:08:32.213782 kubelet[3344]: E0702 08:08:32.212787 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.213782 kubelet[3344]: W0702 08:08:32.212829 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.213782 kubelet[3344]: E0702 08:08:32.212864 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:32.215636 kubelet[3344]: E0702 08:08:32.215584 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.215636 kubelet[3344]: W0702 08:08:32.215626 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.215636 kubelet[3344]: E0702 08:08:32.215662 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.217052 kubelet[3344]: E0702 08:08:32.216977 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.217052 kubelet[3344]: W0702 08:08:32.217041 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.217353 kubelet[3344]: E0702 08:08:32.217082 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.318248 kubelet[3344]: E0702 08:08:32.318079 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.318248 kubelet[3344]: W0702 08:08:32.318122 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.318248 kubelet[3344]: E0702 08:08:32.318158 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.319877 kubelet[3344]: E0702 08:08:32.319578 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.319877 kubelet[3344]: W0702 08:08:32.319619 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.319877 kubelet[3344]: E0702 08:08:32.319654 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.320692 kubelet[3344]: E0702 08:08:32.320603 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.321066 kubelet[3344]: W0702 08:08:32.320920 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.321066 kubelet[3344]: E0702 08:08:32.320969 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:32.422758 kubelet[3344]: E0702 08:08:32.422352 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.422758 kubelet[3344]: W0702 08:08:32.422392 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.422758 kubelet[3344]: E0702 08:08:32.422468 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.423421 kubelet[3344]: E0702 08:08:32.423381 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.423889 kubelet[3344]: W0702 08:08:32.423576 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.423889 kubelet[3344]: E0702 08:08:32.423646 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.424479 kubelet[3344]: E0702 08:08:32.424441 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.424873 kubelet[3344]: W0702 08:08:32.424634 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.424873 kubelet[3344]: E0702 08:08:32.424733 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.425672 kubelet[3344]: E0702 08:08:32.425383 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.425672 kubelet[3344]: W0702 08:08:32.425418 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.425672 kubelet[3344]: E0702 08:08:32.425465 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.426631 kubelet[3344]: E0702 08:08:32.426328 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.426631 kubelet[3344]: W0702 08:08:32.426365 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.426631 kubelet[3344]: E0702 08:08:32.426419 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:32.427508 kubelet[3344]: E0702 08:08:32.427197 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.427508 kubelet[3344]: W0702 08:08:32.427262 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.428342 kubelet[3344]: E0702 08:08:32.428050 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.428342 kubelet[3344]: W0702 08:08:32.428085 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.428342 kubelet[3344]: E0702 08:08:32.428118 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.428342 kubelet[3344]: E0702 08:08:32.428297 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.429110 kubelet[3344]: E0702 08:08:32.428963 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.429110 kubelet[3344]: W0702 08:08:32.429002 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.429110 kubelet[3344]: E0702 08:08:32.429051 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.429687 kubelet[3344]: E0702 08:08:32.429524 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.429687 kubelet[3344]: W0702 08:08:32.429553 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.429687 kubelet[3344]: E0702 08:08:32.429606 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.430813 kubelet[3344]: E0702 08:08:32.430497 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.430813 kubelet[3344]: W0702 08:08:32.430535 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.430813 kubelet[3344]: E0702 08:08:32.430569 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:32.432277 kubelet[3344]: E0702 08:08:32.431444 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.432277 kubelet[3344]: W0702 08:08:32.431483 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.432277 kubelet[3344]: E0702 08:08:32.431516 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.438202 kubelet[3344]: E0702 08:08:32.438137 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.438202 kubelet[3344]: W0702 08:08:32.438181 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.438202 kubelet[3344]: E0702 08:08:32.438243 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.448661 kubelet[3344]: E0702 08:08:32.448606 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.448661 kubelet[3344]: W0702 08:08:32.448648 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.448894 kubelet[3344]: E0702 08:08:32.448690 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.526304 kubelet[3344]: E0702 08:08:32.526115 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.526304 kubelet[3344]: W0702 08:08:32.526170 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.526304 kubelet[3344]: E0702 08:08:32.526211 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.531851 containerd[1920]: time="2024-07-02T08:08:32.531081429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-749b68d9f5-v9674,Uid:d74a9fcb-547a-42fe-9f3c-2caf90fb3b94,Namespace:calico-system,Attempt:0,}" Jul 2 08:08:32.624513 containerd[1920]: time="2024-07-02T08:08:32.623908280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:32.626416 containerd[1920]: time="2024-07-02T08:08:32.624076160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:32.626416 containerd[1920]: time="2024-07-02T08:08:32.624389096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:32.626416 containerd[1920]: time="2024-07-02T08:08:32.624434659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:32.642359 kubelet[3344]: E0702 08:08:32.642064 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.642359 kubelet[3344]: W0702 08:08:32.642112 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.642359 kubelet[3344]: E0702 08:08:32.642150 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.646522 kubelet[3344]: E0702 08:08:32.643388 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.646522 kubelet[3344]: W0702 08:08:32.643418 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.646522 kubelet[3344]: E0702 08:08:32.643454 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.648414 kubelet[3344]: E0702 08:08:32.646861 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.648414 kubelet[3344]: W0702 08:08:32.646894 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.648414 kubelet[3344]: E0702 08:08:32.646932 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.651270 kubelet[3344]: E0702 08:08:32.649813 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.652342 kubelet[3344]: W0702 08:08:32.651545 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.652342 kubelet[3344]: E0702 08:08:32.651609 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:32.657539 kubelet[3344]: E0702 08:08:32.653618 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.657539 kubelet[3344]: W0702 08:08:32.653664 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.657539 kubelet[3344]: E0702 08:08:32.653707 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.658579 kubelet[3344]: E0702 08:08:32.658538 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:32.658765 kubelet[3344]: W0702 08:08:32.658733 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:32.658902 kubelet[3344]: E0702 08:08:32.658873 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:32.682466 kubelet[3344]: E0702 08:08:32.681100 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x68xj" podUID="6f6df60c-e3f9-43ed-b240-8e9935f2d2eb" Jul 2 08:08:32.713907 systemd[1]: run-containerd-runc-k8s.io-79d25ff4c0abf4c1d77c062c9ddc9d19c18c33075f0cd2dddf991de4f93045b9-runc.fqT2gV.mount: Deactivated successfully. Jul 2 08:08:32.733572 systemd[1]: Started cri-containerd-79d25ff4c0abf4c1d77c062c9ddc9d19c18c33075f0cd2dddf991de4f93045b9.scope - libcontainer container 79d25ff4c0abf4c1d77c062c9ddc9d19c18c33075f0cd2dddf991de4f93045b9. Jul 2 08:08:32.796945 containerd[1920]: time="2024-07-02T08:08:32.796842441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hsr9m,Uid:a55bc379-3415-4676-96e8-a4ba7ca4606c,Namespace:calico-system,Attempt:0,}" Jul 2 08:08:32.873683 containerd[1920]: time="2024-07-02T08:08:32.870755351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:32.873683 containerd[1920]: time="2024-07-02T08:08:32.872909900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:32.873683 containerd[1920]: time="2024-07-02T08:08:32.872962318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:32.873683 containerd[1920]: time="2024-07-02T08:08:32.872988035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:32.894709 containerd[1920]: time="2024-07-02T08:08:32.894627986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-749b68d9f5-v9674,Uid:d74a9fcb-547a-42fe-9f3c-2caf90fb3b94,Namespace:calico-system,Attempt:0,} returns sandbox id \"79d25ff4c0abf4c1d77c062c9ddc9d19c18c33075f0cd2dddf991de4f93045b9\"" Jul 2 08:08:32.906101 containerd[1920]: time="2024-07-02T08:08:32.905030945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 08:08:32.932562 systemd[1]: Started cri-containerd-92ee6f0cf4bbd63ddcb903456312f399bf3ea4558b8ea35e94edd02db6503da5.scope - libcontainer container 92ee6f0cf4bbd63ddcb903456312f399bf3ea4558b8ea35e94edd02db6503da5. Jul 2 08:08:33.024168 containerd[1920]: time="2024-07-02T08:08:33.024083522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hsr9m,Uid:a55bc379-3415-4676-96e8-a4ba7ca4606c,Namespace:calico-system,Attempt:0,} returns sandbox id \"92ee6f0cf4bbd63ddcb903456312f399bf3ea4558b8ea35e94edd02db6503da5\"" Jul 2 08:08:34.681203 kubelet[3344]: E0702 08:08:34.679692 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x68xj" podUID="6f6df60c-e3f9-43ed-b240-8e9935f2d2eb" Jul 2 08:08:35.511759 containerd[1920]: time="2024-07-02T08:08:35.511483462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:35.516403 containerd[1920]: time="2024-07-02T08:08:35.515632159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 08:08:35.516403 containerd[1920]: time="2024-07-02T08:08:35.515922452Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:35.523100 containerd[1920]: time="2024-07-02T08:08:35.522152258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:35.527468 containerd[1920]: time="2024-07-02T08:08:35.526626318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.621520623s" Jul 2 08:08:35.527468 containerd[1920]: time="2024-07-02T08:08:35.526715102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 08:08:35.531765 containerd[1920]: time="2024-07-02T08:08:35.530197695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 08:08:35.576821 containerd[1920]: time="2024-07-02T08:08:35.576742063Z" level=info msg="CreateContainer within sandbox \"79d25ff4c0abf4c1d77c062c9ddc9d19c18c33075f0cd2dddf991de4f93045b9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 08:08:35.621183 containerd[1920]: 
time="2024-07-02T08:08:35.621077520Z" level=info msg="CreateContainer within sandbox \"79d25ff4c0abf4c1d77c062c9ddc9d19c18c33075f0cd2dddf991de4f93045b9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ceb4e43001f16ecebc590275f34cdf6606cb934bf8f7220983ed1f8370387088\"" Jul 2 08:08:35.623978 containerd[1920]: time="2024-07-02T08:08:35.622461548Z" level=info msg="StartContainer for \"ceb4e43001f16ecebc590275f34cdf6606cb934bf8f7220983ed1f8370387088\"" Jul 2 08:08:35.715572 systemd[1]: Started cri-containerd-ceb4e43001f16ecebc590275f34cdf6606cb934bf8f7220983ed1f8370387088.scope - libcontainer container ceb4e43001f16ecebc590275f34cdf6606cb934bf8f7220983ed1f8370387088. Jul 2 08:08:35.886533 containerd[1920]: time="2024-07-02T08:08:35.885809822Z" level=info msg="StartContainer for \"ceb4e43001f16ecebc590275f34cdf6606cb934bf8f7220983ed1f8370387088\" returns successfully" Jul 2 08:08:36.682667 kubelet[3344]: E0702 08:08:36.680349 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x68xj" podUID="6f6df60c-e3f9-43ed-b240-8e9935f2d2eb" Jul 2 08:08:36.867535 containerd[1920]: time="2024-07-02T08:08:36.865360928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:36.870899 containerd[1920]: time="2024-07-02T08:08:36.870648598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 08:08:36.887163 containerd[1920]: time="2024-07-02T08:08:36.886811292Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:36.897537 containerd[1920]: time="2024-07-02T08:08:36.897421906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:36.907093 containerd[1920]: time="2024-07-02T08:08:36.905560042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.375249226s" Jul 2 08:08:36.907093 containerd[1920]: time="2024-07-02T08:08:36.905671925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 08:08:36.916153 containerd[1920]: time="2024-07-02T08:08:36.916075461Z" level=info msg="CreateContainer within sandbox \"92ee6f0cf4bbd63ddcb903456312f399bf3ea4558b8ea35e94edd02db6503da5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 08:08:36.936216 kubelet[3344]: I0702 08:08:36.935994 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-749b68d9f5-v9674" podStartSLOduration=4.309420991 podStartE2EDuration="6.935944214s" podCreationTimestamp="2024-07-02 
08:08:30 +0000 UTC" firstStartedPulling="2024-07-02 08:08:32.902803376 +0000 UTC m=+26.430664713" lastFinishedPulling="2024-07-02 08:08:35.529326599 +0000 UTC m=+29.057187936" observedRunningTime="2024-07-02 08:08:36.931863819 +0000 UTC m=+30.459725228" watchObservedRunningTime="2024-07-02 08:08:36.935944214 +0000 UTC m=+30.463805563" Jul 2 08:08:36.971129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2784430219.mount: Deactivated successfully. Jul 2 08:08:36.981172 kubelet[3344]: E0702 08:08:36.979935 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:36.981834 kubelet[3344]: W0702 08:08:36.981667 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:36.982863 kubelet[3344]: E0702 08:08:36.982306 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:36.987685 kubelet[3344]: E0702 08:08:36.987642 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:36.988267 kubelet[3344]: W0702 08:08:36.987882 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:36.989074 kubelet[3344]: E0702 08:08:36.988165 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:36.992023 kubelet[3344]: E0702 08:08:36.991193 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:36.992023 kubelet[3344]: W0702 08:08:36.991687 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:36.993082 kubelet[3344]: E0702 08:08:36.992127 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:36.995457 kubelet[3344]: E0702 08:08:36.994850 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:36.995457 kubelet[3344]: W0702 08:08:36.994886 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:36.995457 kubelet[3344]: E0702 08:08:36.994920 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jul 2 08:08:36.998209 containerd[1920]: time="2024-07-02T08:08:36.997855591Z" level=info msg="CreateContainer within sandbox \"92ee6f0cf4bbd63ddcb903456312f399bf3ea4558b8ea35e94edd02db6503da5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"af92ef55e6935579c5a18db87be49931a02b333dc3c026a732328f1b6172824b\"" Jul 2 08:08:36.998719 kubelet[3344]: E0702 08:08:36.998297 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:36.998719 kubelet[3344]: W0702 08:08:36.998333 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:36.998719 kubelet[3344]: E0702 08:08:36.998415 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.001216 kubelet[3344]: E0702 08:08:37.000185 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.001216 kubelet[3344]: W0702 08:08:37.000561 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.001216 kubelet[3344]: E0702 08:08:37.000793 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.003606 containerd[1920]: time="2024-07-02T08:08:37.002150882Z" level=info msg="StartContainer for \"af92ef55e6935579c5a18db87be49931a02b333dc3c026a732328f1b6172824b\"" Jul 2 08:08:37.003874 kubelet[3344]: E0702 08:08:37.003692 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.003874 kubelet[3344]: W0702 08:08:37.003728 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.003874 kubelet[3344]: E0702 08:08:37.003764 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.005384 kubelet[3344]: E0702 08:08:37.005327 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.005384 kubelet[3344]: W0702 08:08:37.005372 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.005644 kubelet[3344]: E0702 08:08:37.005409 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:37.011040 kubelet[3344]: E0702 08:08:37.010950 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.011040 kubelet[3344]: W0702 08:08:37.011001 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.011040 kubelet[3344]: E0702 08:08:37.011043 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.013010 kubelet[3344]: E0702 08:08:37.012665 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.013010 kubelet[3344]: W0702 08:08:37.012710 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.013010 kubelet[3344]: E0702 08:08:37.012747 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.015648 kubelet[3344]: E0702 08:08:37.015155 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.015648 kubelet[3344]: W0702 08:08:37.015198 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.015648 kubelet[3344]: E0702 08:08:37.015256 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.020457 kubelet[3344]: E0702 08:08:37.019262 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.020599 kubelet[3344]: W0702 08:08:37.020109 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.020983 kubelet[3344]: E0702 08:08:37.020658 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.030896 kubelet[3344]: E0702 08:08:37.030638 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.031497 kubelet[3344]: W0702 08:08:37.031286 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.031497 kubelet[3344]: E0702 08:08:37.031383 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:37.037173 kubelet[3344]: E0702 08:08:37.036848 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.037173 kubelet[3344]: W0702 08:08:37.036894 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.037173 kubelet[3344]: E0702 08:08:37.036933 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.040258 kubelet[3344]: E0702 08:08:37.039464 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.040258 kubelet[3344]: W0702 08:08:37.039513 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.040258 kubelet[3344]: E0702 08:08:37.039552 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.044311 kubelet[3344]: E0702 08:08:37.043399 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.044311 kubelet[3344]: W0702 08:08:37.043441 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.044311 kubelet[3344]: E0702 08:08:37.043477 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.044311 kubelet[3344]: E0702 08:08:37.044058 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.044311 kubelet[3344]: W0702 08:08:37.044105 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.044311 kubelet[3344]: E0702 08:08:37.044152 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.045338 kubelet[3344]: E0702 08:08:37.045286 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.045338 kubelet[3344]: W0702 08:08:37.045329 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.045901 kubelet[3344]: E0702 08:08:37.045822 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:37.048255 kubelet[3344]: E0702 08:08:37.048002 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.048255 kubelet[3344]: W0702 08:08:37.048045 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.048255 kubelet[3344]: E0702 08:08:37.048095 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.049417 kubelet[3344]: E0702 08:08:37.049373 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.050642 kubelet[3344]: W0702 08:08:37.049743 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.050642 kubelet[3344]: E0702 08:08:37.050366 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.053337 kubelet[3344]: E0702 08:08:37.051725 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.053337 kubelet[3344]: W0702 08:08:37.051771 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.053337 kubelet[3344]: E0702 08:08:37.052513 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:37.055162 kubelet[3344]: E0702 08:08:37.055091 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.055438 kubelet[3344]: W0702 08:08:37.055390 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.057501 kubelet[3344]: E0702 08:08:37.057454 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.058261 kubelet[3344]: W0702 08:08:37.057849 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.061050 kubelet[3344]: E0702 08:08:37.060731 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.061050 kubelet[3344]: W0702 08:08:37.060805 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.061050 kubelet[3344]: E0702 08:08:37.060875 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.062809 kubelet[3344]: E0702 08:08:37.062720 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.063301 kubelet[3344]: W0702 08:08:37.062760 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.064400 kubelet[3344]: E0702 08:08:37.063144 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.070016 kubelet[3344]: E0702 08:08:37.068933 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.070016 kubelet[3344]: E0702 08:08:37.068997 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.070016 kubelet[3344]: E0702 08:08:37.069111 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.070016 kubelet[3344]: W0702 08:08:37.069133 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.070016 kubelet[3344]: E0702 08:08:37.069165 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:37.072708 kubelet[3344]: E0702 08:08:37.072662 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.072951 kubelet[3344]: W0702 08:08:37.072913 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.073103 kubelet[3344]: E0702 08:08:37.073074 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.075351 kubelet[3344]: E0702 08:08:37.074724 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.075351 kubelet[3344]: W0702 08:08:37.074776 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.075351 kubelet[3344]: E0702 08:08:37.074813 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.078249 kubelet[3344]: E0702 08:08:37.078183 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.078515 kubelet[3344]: W0702 08:08:37.078424 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.086347 kubelet[3344]: E0702 08:08:37.078662 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.087002 kubelet[3344]: E0702 08:08:37.086959 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.087560 kubelet[3344]: W0702 08:08:37.087515 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.089451 kubelet[3344]: E0702 08:08:37.089393 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.096631 kubelet[3344]: E0702 08:08:37.096333 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.096631 kubelet[3344]: W0702 08:08:37.096374 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.096631 kubelet[3344]: E0702 08:08:37.096409 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:08:37.100799 kubelet[3344]: E0702 08:08:37.100747 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.101012 kubelet[3344]: W0702 08:08:37.100971 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.101182 kubelet[3344]: E0702 08:08:37.101149 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.106206 kubelet[3344]: E0702 08:08:37.105521 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:08:37.106206 kubelet[3344]: W0702 08:08:37.105559 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:08:37.106206 kubelet[3344]: E0702 08:08:37.105593 3344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:08:37.155260 systemd[1]: Started cri-containerd-af92ef55e6935579c5a18db87be49931a02b333dc3c026a732328f1b6172824b.scope - libcontainer container af92ef55e6935579c5a18db87be49931a02b333dc3c026a732328f1b6172824b. Jul 2 08:08:37.267822 containerd[1920]: time="2024-07-02T08:08:37.267559782Z" level=info msg="StartContainer for \"af92ef55e6935579c5a18db87be49931a02b333dc3c026a732328f1b6172824b\" returns successfully" Jul 2 08:08:37.319460 systemd[1]: cri-containerd-af92ef55e6935579c5a18db87be49931a02b333dc3c026a732328f1b6172824b.scope: Deactivated successfully. Jul 2 08:08:37.547791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af92ef55e6935579c5a18db87be49931a02b333dc3c026a732328f1b6172824b-rootfs.mount: Deactivated successfully. 
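A note on the repeated driver-call.go / plugins.go errors above: the kubelet is probing the FlexVolume plugin directory nodeagent~uds, but the expected uds executable is not found in $PATH, so each "init" call yields empty output, and unmarshalling an empty string is exactly what produces "unexpected end of JSON input". The following is a minimal Go sketch of that failure mode and of the JSON shape a FlexVolume driver conventionally prints for "init"; the struct, field names, and file name are illustrative assumptions, not kubelet source.

    // flexvol_init_sketch.go - illustrative only.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DriverStatus approximates the response a FlexVolume driver is expected
    // to print; the field names follow the usual FlexVolume convention and
    // are assumptions here.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The uds binary is missing, so the "driver output" is an empty string.
        var st DriverStatus
        if err := json.Unmarshal([]byte(""), &st); err != nil {
            fmt.Println("empty output:", err) // unexpected end of JSON input
        }

        // What a working driver would typically answer to "init".
        reply, _ := json.Marshal(DriverStatus{
            Status:       "Success",
            Capabilities: map[string]bool{"attach": false},
        })
        fmt.Println("expected init reply:", string(reply))
    }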
Jul 2 08:08:37.762772 containerd[1920]: time="2024-07-02T08:08:37.762620520Z" level=info msg="shim disconnected" id=af92ef55e6935579c5a18db87be49931a02b333dc3c026a732328f1b6172824b namespace=k8s.io Jul 2 08:08:37.762772 containerd[1920]: time="2024-07-02T08:08:37.762725572Z" level=warning msg="cleaning up after shim disconnected" id=af92ef55e6935579c5a18db87be49931a02b333dc3c026a732328f1b6172824b namespace=k8s.io Jul 2 08:08:37.762772 containerd[1920]: time="2024-07-02T08:08:37.762750653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:08:37.911259 kubelet[3344]: I0702 08:08:37.911050 3344 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:08:37.916972 containerd[1920]: time="2024-07-02T08:08:37.916573807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 08:08:38.680978 kubelet[3344]: E0702 08:08:38.680336 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x68xj" podUID="6f6df60c-e3f9-43ed-b240-8e9935f2d2eb" Jul 2 08:08:40.681120 kubelet[3344]: E0702 08:08:40.679664 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x68xj" podUID="6f6df60c-e3f9-43ed-b240-8e9935f2d2eb" Jul 2 08:08:41.691728 containerd[1920]: time="2024-07-02T08:08:41.691657019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:41.693414 containerd[1920]: time="2024-07-02T08:08:41.693333273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 08:08:41.694774 containerd[1920]: time="2024-07-02T08:08:41.694648242Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:41.699806 containerd[1920]: time="2024-07-02T08:08:41.699342240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:41.703200 containerd[1920]: time="2024-07-02T08:08:41.703140134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 3.786487724s" Jul 2 08:08:41.703453 containerd[1920]: time="2024-07-02T08:08:41.703418109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 08:08:41.710818 containerd[1920]: time="2024-07-02T08:08:41.710635876Z" level=info msg="CreateContainer within sandbox \"92ee6f0cf4bbd63ddcb903456312f399bf3ea4558b8ea35e94edd02db6503da5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 08:08:41.741789 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1462402712.mount: Deactivated successfully. Jul 2 08:08:41.743730 containerd[1920]: time="2024-07-02T08:08:41.742840915Z" level=info msg="CreateContainer within sandbox \"92ee6f0cf4bbd63ddcb903456312f399bf3ea4558b8ea35e94edd02db6503da5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e81a963dfd6c3c3a4ff9306bee7dcc889aea046c6439e53bc1d752fef2b860da\"" Jul 2 08:08:41.746546 containerd[1920]: time="2024-07-02T08:08:41.745688138Z" level=info msg="StartContainer for \"e81a963dfd6c3c3a4ff9306bee7dcc889aea046c6439e53bc1d752fef2b860da\"" Jul 2 08:08:41.796550 systemd[1]: run-containerd-runc-k8s.io-e81a963dfd6c3c3a4ff9306bee7dcc889aea046c6439e53bc1d752fef2b860da-runc.nDSEFU.mount: Deactivated successfully. Jul 2 08:08:41.810888 systemd[1]: Started cri-containerd-e81a963dfd6c3c3a4ff9306bee7dcc889aea046c6439e53bc1d752fef2b860da.scope - libcontainer container e81a963dfd6c3c3a4ff9306bee7dcc889aea046c6439e53bc1d752fef2b860da. Jul 2 08:08:41.884631 containerd[1920]: time="2024-07-02T08:08:41.884351460Z" level=info msg="StartContainer for \"e81a963dfd6c3c3a4ff9306bee7dcc889aea046c6439e53bc1d752fef2b860da\" returns successfully" Jul 2 08:08:42.681262 kubelet[3344]: E0702 08:08:42.679660 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x68xj" podUID="6f6df60c-e3f9-43ed-b240-8e9935f2d2eb" Jul 2 08:08:42.961252 containerd[1920]: time="2024-07-02T08:08:42.960888583Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:08:42.968115 kubelet[3344]: I0702 08:08:42.968064 3344 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 08:08:42.970529 systemd[1]: cri-containerd-e81a963dfd6c3c3a4ff9306bee7dcc889aea046c6439e53bc1d752fef2b860da.scope: Deactivated successfully. 
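On the containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized"): the file that changed, calico-kubeconfig, is a credential file rather than a CNI network configuration, and the runtime typically keeps NetworkReady=false until a *.conf, *.conflist, or *.json network config shows up in that directory. A rough Go sketch of that kind of directory probe follows; the extension filter and messages are assumptions for illustration, not containerd source.

    // cni_config_probe_sketch.go - illustrative only.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read", dir+":", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("network config candidate:", e.Name())
                found = true
            default:
                // e.g. calico-kubeconfig: ignored, it is not a network config.
                fmt.Println("ignored:", e.Name())
            }
        }
        if !found {
            fmt.Println("no network config found in", dir, "- NetworkReady stays false")
        }
    }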
Jul 2 08:08:43.026482 kubelet[3344]: I0702 08:08:43.025845 3344 topology_manager.go:215] "Topology Admit Handler" podUID="3c33e4ef-e5bf-43cc-81b9-869ef5820fac" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hwgh6" Jul 2 08:08:43.037463 kubelet[3344]: W0702 08:08:43.037393 3344 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-16-163" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-163' and this object Jul 2 08:08:43.037463 kubelet[3344]: E0702 08:08:43.037454 3344 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-16-163" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-163' and this object Jul 2 08:08:43.051869 kubelet[3344]: I0702 08:08:43.050678 3344 topology_manager.go:215] "Topology Admit Handler" podUID="66390efd-8112-4c91-bea1-23630113ea89" podNamespace="calico-system" podName="calico-kube-controllers-79465c8fc4-tcgmn" Jul 2 08:08:43.051869 kubelet[3344]: I0702 08:08:43.051190 3344 topology_manager.go:215] "Topology Admit Handler" podUID="7f019231-67cc-4115-a1b0-8a99961292ba" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8l6zv" Jul 2 08:08:43.058079 systemd[1]: Created slice kubepods-burstable-pod3c33e4ef_e5bf_43cc_81b9_869ef5820fac.slice - libcontainer container kubepods-burstable-pod3c33e4ef_e5bf_43cc_81b9_869ef5820fac.slice. Jul 2 08:08:43.071865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e81a963dfd6c3c3a4ff9306bee7dcc889aea046c6439e53bc1d752fef2b860da-rootfs.mount: Deactivated successfully. Jul 2 08:08:43.103327 systemd[1]: Created slice kubepods-besteffort-pod66390efd_8112_4c91_bea1_23630113ea89.slice - libcontainer container kubepods-besteffort-pod66390efd_8112_4c91_bea1_23630113ea89.slice. 
Jul 2 08:08:43.109566 kubelet[3344]: I0702 08:08:43.108448 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57f4w\" (UniqueName: \"kubernetes.io/projected/66390efd-8112-4c91-bea1-23630113ea89-kube-api-access-57f4w\") pod \"calico-kube-controllers-79465c8fc4-tcgmn\" (UID: \"66390efd-8112-4c91-bea1-23630113ea89\") " pod="calico-system/calico-kube-controllers-79465c8fc4-tcgmn" Jul 2 08:08:43.109566 kubelet[3344]: I0702 08:08:43.108528 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c33e4ef-e5bf-43cc-81b9-869ef5820fac-config-volume\") pod \"coredns-7db6d8ff4d-hwgh6\" (UID: \"3c33e4ef-e5bf-43cc-81b9-869ef5820fac\") " pod="kube-system/coredns-7db6d8ff4d-hwgh6" Jul 2 08:08:43.109566 kubelet[3344]: I0702 08:08:43.108573 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66390efd-8112-4c91-bea1-23630113ea89-tigera-ca-bundle\") pod \"calico-kube-controllers-79465c8fc4-tcgmn\" (UID: \"66390efd-8112-4c91-bea1-23630113ea89\") " pod="calico-system/calico-kube-controllers-79465c8fc4-tcgmn" Jul 2 08:08:43.109566 kubelet[3344]: I0702 08:08:43.108617 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4sb2\" (UniqueName: \"kubernetes.io/projected/7f019231-67cc-4115-a1b0-8a99961292ba-kube-api-access-r4sb2\") pod \"coredns-7db6d8ff4d-8l6zv\" (UID: \"7f019231-67cc-4115-a1b0-8a99961292ba\") " pod="kube-system/coredns-7db6d8ff4d-8l6zv" Jul 2 08:08:43.109566 kubelet[3344]: I0702 08:08:43.108685 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f019231-67cc-4115-a1b0-8a99961292ba-config-volume\") pod \"coredns-7db6d8ff4d-8l6zv\" (UID: \"7f019231-67cc-4115-a1b0-8a99961292ba\") " pod="kube-system/coredns-7db6d8ff4d-8l6zv" Jul 2 08:08:43.112604 kubelet[3344]: I0702 08:08:43.108724 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z52s\" (UniqueName: \"kubernetes.io/projected/3c33e4ef-e5bf-43cc-81b9-869ef5820fac-kube-api-access-6z52s\") pod \"coredns-7db6d8ff4d-hwgh6\" (UID: \"3c33e4ef-e5bf-43cc-81b9-869ef5820fac\") " pod="kube-system/coredns-7db6d8ff4d-hwgh6" Jul 2 08:08:43.130562 systemd[1]: Created slice kubepods-burstable-pod7f019231_67cc_4115_a1b0_8a99961292ba.slice - libcontainer container kubepods-burstable-pod7f019231_67cc_4115_a1b0_8a99961292ba.slice. 
Jul 2 08:08:43.415783 containerd[1920]: time="2024-07-02T08:08:43.415634782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79465c8fc4-tcgmn,Uid:66390efd-8112-4c91-bea1-23630113ea89,Namespace:calico-system,Attempt:0,}" Jul 2 08:08:43.654528 containerd[1920]: time="2024-07-02T08:08:43.654153219Z" level=info msg="shim disconnected" id=e81a963dfd6c3c3a4ff9306bee7dcc889aea046c6439e53bc1d752fef2b860da namespace=k8s.io Jul 2 08:08:43.654528 containerd[1920]: time="2024-07-02T08:08:43.654280855Z" level=warning msg="cleaning up after shim disconnected" id=e81a963dfd6c3c3a4ff9306bee7dcc889aea046c6439e53bc1d752fef2b860da namespace=k8s.io Jul 2 08:08:43.654528 containerd[1920]: time="2024-07-02T08:08:43.654304290Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:08:43.702479 containerd[1920]: time="2024-07-02T08:08:43.702165549Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:08:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 08:08:43.780992 containerd[1920]: time="2024-07-02T08:08:43.780885326Z" level=error msg="Failed to destroy network for sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:43.781637 containerd[1920]: time="2024-07-02T08:08:43.781585588Z" level=error msg="encountered an error cleaning up failed sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:43.781757 containerd[1920]: time="2024-07-02T08:08:43.781673976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79465c8fc4-tcgmn,Uid:66390efd-8112-4c91-bea1-23630113ea89,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:43.782041 kubelet[3344]: E0702 08:08:43.781972 3344 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:43.783494 kubelet[3344]: E0702 08:08:43.782077 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79465c8fc4-tcgmn" Jul 2 08:08:43.783494 kubelet[3344]: E0702 08:08:43.782115 3344 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79465c8fc4-tcgmn" Jul 2 08:08:43.783494 kubelet[3344]: E0702 08:08:43.782190 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79465c8fc4-tcgmn_calico-system(66390efd-8112-4c91-bea1-23630113ea89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79465c8fc4-tcgmn_calico-system(66390efd-8112-4c91-bea1-23630113ea89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79465c8fc4-tcgmn" podUID="66390efd-8112-4c91-bea1-23630113ea89" Jul 2 08:08:43.939583 kubelet[3344]: I0702 08:08:43.939540 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:08:43.943447 containerd[1920]: time="2024-07-02T08:08:43.941738738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 08:08:43.947618 containerd[1920]: time="2024-07-02T08:08:43.945375247Z" level=info msg="StopPodSandbox for \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\"" Jul 2 08:08:43.947618 containerd[1920]: time="2024-07-02T08:08:43.946359427Z" level=info msg="Ensure that sandbox a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a in task-service has been cleanup successfully" Jul 2 08:08:44.030695 containerd[1920]: time="2024-07-02T08:08:44.030498796Z" level=error msg="StopPodSandbox for \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\" failed" error="failed to destroy network for sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.032935 kubelet[3344]: E0702 08:08:44.032731 3344 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:08:44.033911 kubelet[3344]: E0702 08:08:44.032821 3344 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a"} Jul 2 08:08:44.033911 kubelet[3344]: E0702 08:08:44.033506 3344 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66390efd-8112-4c91-bea1-23630113ea89\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:08:44.033911 kubelet[3344]: E0702 08:08:44.033555 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66390efd-8112-4c91-bea1-23630113ea89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79465c8fc4-tcgmn" podUID="66390efd-8112-4c91-bea1-23630113ea89" Jul 2 08:08:44.067654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a-shm.mount: Deactivated successfully. Jul 2 08:08:44.283103 containerd[1920]: time="2024-07-02T08:08:44.282947615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hwgh6,Uid:3c33e4ef-e5bf-43cc-81b9-869ef5820fac,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:44.340533 containerd[1920]: time="2024-07-02T08:08:44.340414299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8l6zv,Uid:7f019231-67cc-4115-a1b0-8a99961292ba,Namespace:kube-system,Attempt:0,}" Jul 2 08:08:44.456596 containerd[1920]: time="2024-07-02T08:08:44.456502499Z" level=error msg="Failed to destroy network for sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.457335 containerd[1920]: time="2024-07-02T08:08:44.457175603Z" level=error msg="encountered an error cleaning up failed sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.458345 containerd[1920]: time="2024-07-02T08:08:44.458149938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hwgh6,Uid:3c33e4ef-e5bf-43cc-81b9-869ef5820fac,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.458905 kubelet[3344]: E0702 08:08:44.458800 3344 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.459057 kubelet[3344]: E0702 08:08:44.458915 3344 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hwgh6" Jul 2 08:08:44.459121 kubelet[3344]: E0702 08:08:44.459066 3344 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hwgh6" Jul 2 08:08:44.459203 kubelet[3344]: E0702 08:08:44.459161 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hwgh6_kube-system(3c33e4ef-e5bf-43cc-81b9-869ef5820fac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hwgh6_kube-system(3c33e4ef-e5bf-43cc-81b9-869ef5820fac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hwgh6" podUID="3c33e4ef-e5bf-43cc-81b9-869ef5820fac" Jul 2 08:08:44.500458 containerd[1920]: time="2024-07-02T08:08:44.500309932Z" level=error msg="Failed to destroy network for sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.501021 containerd[1920]: time="2024-07-02T08:08:44.500951484Z" level=error msg="encountered an error cleaning up failed sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.501397 containerd[1920]: time="2024-07-02T08:08:44.501067306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8l6zv,Uid:7f019231-67cc-4115-a1b0-8a99961292ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.501642 kubelet[3344]: E0702 08:08:44.501556 3344 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 
08:08:44.501732 kubelet[3344]: E0702 08:08:44.501665 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8l6zv" Jul 2 08:08:44.501732 kubelet[3344]: E0702 08:08:44.501701 3344 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8l6zv" Jul 2 08:08:44.502093 kubelet[3344]: E0702 08:08:44.501780 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8l6zv_kube-system(7f019231-67cc-4115-a1b0-8a99961292ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8l6zv_kube-system(7f019231-67cc-4115-a1b0-8a99961292ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8l6zv" podUID="7f019231-67cc-4115-a1b0-8a99961292ba" Jul 2 08:08:44.692338 systemd[1]: Created slice kubepods-besteffort-pod6f6df60c_e3f9_43ed_b240_8e9935f2d2eb.slice - libcontainer container kubepods-besteffort-pod6f6df60c_e3f9_43ed_b240_8e9935f2d2eb.slice. 
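Every failed sandbox in this stretch hits the same precondition: the Calico CNI plugin reads the node name from /var/lib/calico/nodename, and that file only exists once the calico/node container (still being pulled at this point in the log) is running and has mounted /var/lib/calico/. A small Go sketch of that check is shown below as an illustration of where the repeated stat error comes from; it is not the plugin's actual source.

    // nodename_check_sketch.go - illustrative only.
    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    const nodenameFile = "/var/lib/calico/nodename" // written by calico/node at startup

    func main() {
        data, err := os.ReadFile(nodenameFile)
        if errors.Is(err, os.ErrNotExist) {
            // The state the log shows: calico-node has not populated the file yet,
            // so every pod sandbox add/delete is rejected.
            fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
            return
        }
        if err != nil {
            fmt.Println("unexpected error:", err)
            return
        }
        fmt.Println("node name:", string(data))
    }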
Jul 2 08:08:44.697308 containerd[1920]: time="2024-07-02T08:08:44.697180359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x68xj,Uid:6f6df60c-e3f9-43ed-b240-8e9935f2d2eb,Namespace:calico-system,Attempt:0,}" Jul 2 08:08:44.853618 containerd[1920]: time="2024-07-02T08:08:44.853474396Z" level=error msg="Failed to destroy network for sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.854847 containerd[1920]: time="2024-07-02T08:08:44.854719118Z" level=error msg="encountered an error cleaning up failed sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.855049 containerd[1920]: time="2024-07-02T08:08:44.854900973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x68xj,Uid:6f6df60c-e3f9-43ed-b240-8e9935f2d2eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.855588 kubelet[3344]: E0702 08:08:44.855502 3344 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:44.857000 kubelet[3344]: E0702 08:08:44.855608 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x68xj" Jul 2 08:08:44.857000 kubelet[3344]: E0702 08:08:44.855649 3344 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x68xj" Jul 2 08:08:44.857000 kubelet[3344]: E0702 08:08:44.855727 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x68xj_calico-system(6f6df60c-e3f9-43ed-b240-8e9935f2d2eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x68xj_calico-system(6f6df60c-e3f9-43ed-b240-8e9935f2d2eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x68xj" podUID="6f6df60c-e3f9-43ed-b240-8e9935f2d2eb" Jul 2 08:08:44.950986 kubelet[3344]: I0702 08:08:44.950820 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:08:44.955272 containerd[1920]: time="2024-07-02T08:08:44.953546928Z" level=info msg="StopPodSandbox for \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\"" Jul 2 08:08:44.955272 containerd[1920]: time="2024-07-02T08:08:44.953889087Z" level=info msg="Ensure that sandbox 9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d in task-service has been cleanup successfully" Jul 2 08:08:44.958270 kubelet[3344]: I0702 08:08:44.956326 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:08:44.958759 containerd[1920]: time="2024-07-02T08:08:44.958708692Z" level=info msg="StopPodSandbox for \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\"" Jul 2 08:08:44.963785 kubelet[3344]: I0702 08:08:44.963689 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:08:44.964582 containerd[1920]: time="2024-07-02T08:08:44.964478247Z" level=info msg="Ensure that sandbox e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176 in task-service has been cleanup successfully" Jul 2 08:08:44.968377 containerd[1920]: time="2024-07-02T08:08:44.966581794Z" level=info msg="StopPodSandbox for \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\"" Jul 2 08:08:44.968377 containerd[1920]: time="2024-07-02T08:08:44.966944916Z" level=info msg="Ensure that sandbox 28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f in task-service has been cleanup successfully" Jul 2 08:08:45.072941 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176-shm.mount: Deactivated successfully. Jul 2 08:08:45.073148 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f-shm.mount: Deactivated successfully. 
Jul 2 08:08:45.079385 containerd[1920]: time="2024-07-02T08:08:45.079315338Z" level=error msg="StopPodSandbox for \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\" failed" error="failed to destroy network for sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:45.081009 kubelet[3344]: E0702 08:08:45.080761 3344 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:08:45.081009 kubelet[3344]: E0702 08:08:45.080850 3344 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176"} Jul 2 08:08:45.081009 kubelet[3344]: E0702 08:08:45.080910 3344 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f019231-67cc-4115-a1b0-8a99961292ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:08:45.081009 kubelet[3344]: E0702 08:08:45.080952 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f019231-67cc-4115-a1b0-8a99961292ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8l6zv" podUID="7f019231-67cc-4115-a1b0-8a99961292ba" Jul 2 08:08:45.097737 containerd[1920]: time="2024-07-02T08:08:45.097637646Z" level=error msg="StopPodSandbox for \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\" failed" error="failed to destroy network for sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:45.098036 kubelet[3344]: E0702 08:08:45.097965 3344 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:08:45.098160 kubelet[3344]: E0702 08:08:45.098049 3344 
kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f"} Jul 2 08:08:45.098160 kubelet[3344]: E0702 08:08:45.098112 3344 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c33e4ef-e5bf-43cc-81b9-869ef5820fac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:08:45.098389 kubelet[3344]: E0702 08:08:45.098153 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c33e4ef-e5bf-43cc-81b9-869ef5820fac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hwgh6" podUID="3c33e4ef-e5bf-43cc-81b9-869ef5820fac" Jul 2 08:08:45.117730 containerd[1920]: time="2024-07-02T08:08:45.117656211Z" level=error msg="StopPodSandbox for \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\" failed" error="failed to destroy network for sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:08:45.118528 kubelet[3344]: E0702 08:08:45.118279 3344 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:08:45.118528 kubelet[3344]: E0702 08:08:45.118361 3344 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d"} Jul 2 08:08:45.118528 kubelet[3344]: E0702 08:08:45.118425 3344 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:08:45.118528 kubelet[3344]: E0702 08:08:45.118464 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x68xj" podUID="6f6df60c-e3f9-43ed-b240-8e9935f2d2eb" Jul 2 08:08:45.891769 systemd[1]: Started sshd@7-172.31.16.163:22-139.178.89.65:39688.service - OpenSSH per-connection server daemon (139.178.89.65:39688). Jul 2 08:08:46.086037 sshd[4343]: Accepted publickey for core from 139.178.89.65 port 39688 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:08:46.089759 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:08:46.100579 systemd-logind[1911]: New session 8 of user core. Jul 2 08:08:46.106817 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 08:08:46.209315 kubelet[3344]: I0702 08:08:46.208660 3344 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:08:46.491361 sshd[4343]: pam_unix(sshd:session): session closed for user core Jul 2 08:08:46.504921 systemd[1]: sshd@7-172.31.16.163:22-139.178.89.65:39688.service: Deactivated successfully. Jul 2 08:08:46.512199 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:08:46.515173 systemd-logind[1911]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:08:46.518575 systemd-logind[1911]: Removed session 8. Jul 2 08:08:50.617839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611930895.mount: Deactivated successfully. Jul 2 08:08:50.685479 containerd[1920]: time="2024-07-02T08:08:50.685411723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:50.687452 containerd[1920]: time="2024-07-02T08:08:50.687382495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 08:08:50.692060 containerd[1920]: time="2024-07-02T08:08:50.690174415Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:50.696099 containerd[1920]: time="2024-07-02T08:08:50.696003211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:08:50.697703 containerd[1920]: time="2024-07-02T08:08:50.697649983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 6.7558372s" Jul 2 08:08:50.697900 containerd[1920]: time="2024-07-02T08:08:50.697867567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 08:08:50.727733 containerd[1920]: time="2024-07-02T08:08:50.727680211Z" level=info msg="CreateContainer within sandbox \"92ee6f0cf4bbd63ddcb903456312f399bf3ea4558b8ea35e94edd02db6503da5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 08:08:50.761210 containerd[1920]: time="2024-07-02T08:08:50.761148715Z" level=info msg="CreateContainer within sandbox 
\"92ee6f0cf4bbd63ddcb903456312f399bf3ea4558b8ea35e94edd02db6503da5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ea6b0d0ff6e0bf122cd9557ce0b87f8fc3f204ed9f17e6e9c17b3e7e338c6bb9\"" Jul 2 08:08:50.762604 containerd[1920]: time="2024-07-02T08:08:50.762538615Z" level=info msg="StartContainer for \"ea6b0d0ff6e0bf122cd9557ce0b87f8fc3f204ed9f17e6e9c17b3e7e338c6bb9\"" Jul 2 08:08:50.814559 systemd[1]: Started cri-containerd-ea6b0d0ff6e0bf122cd9557ce0b87f8fc3f204ed9f17e6e9c17b3e7e338c6bb9.scope - libcontainer container ea6b0d0ff6e0bf122cd9557ce0b87f8fc3f204ed9f17e6e9c17b3e7e338c6bb9. Jul 2 08:08:50.877574 containerd[1920]: time="2024-07-02T08:08:50.877191500Z" level=info msg="StartContainer for \"ea6b0d0ff6e0bf122cd9557ce0b87f8fc3f204ed9f17e6e9c17b3e7e338c6bb9\" returns successfully" Jul 2 08:08:51.025061 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 08:08:51.025193 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 08:08:51.539142 systemd[1]: Started sshd@8-172.31.16.163:22-139.178.89.65:40240.service - OpenSSH per-connection server daemon (139.178.89.65:40240). Jul 2 08:08:51.738401 sshd[4446]: Accepted publickey for core from 139.178.89.65 port 40240 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:08:51.744305 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:08:51.754611 systemd-logind[1911]: New session 9 of user core. Jul 2 08:08:51.767528 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 08:08:52.051542 sshd[4446]: pam_unix(sshd:session): session closed for user core Jul 2 08:08:52.058521 systemd[1]: sshd@8-172.31.16.163:22-139.178.89.65:40240.service: Deactivated successfully. Jul 2 08:08:52.070044 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:08:52.076957 systemd-logind[1911]: Session 9 logged out. Waiting for processes to exit. Jul 2 08:08:52.080170 systemd-logind[1911]: Removed session 9. Jul 2 08:08:53.587725 systemd-networkd[1842]: vxlan.calico: Link UP Jul 2 08:08:53.587744 systemd-networkd[1842]: vxlan.calico: Gained carrier Jul 2 08:08:53.588861 (udev-worker)[4410]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:08:53.639708 (udev-worker)[4408]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 08:08:55.293603 systemd-networkd[1842]: vxlan.calico: Gained IPv6LL Jul 2 08:08:55.680909 containerd[1920]: time="2024-07-02T08:08:55.680054064Z" level=info msg="StopPodSandbox for \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\"" Jul 2 08:08:55.790737 kubelet[3344]: I0702 08:08:55.790625 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hsr9m" podStartSLOduration=8.119023939 podStartE2EDuration="25.790599516s" podCreationTimestamp="2024-07-02 08:08:30 +0000 UTC" firstStartedPulling="2024-07-02 08:08:33.02775671 +0000 UTC m=+26.555618047" lastFinishedPulling="2024-07-02 08:08:50.699332299 +0000 UTC m=+44.227193624" observedRunningTime="2024-07-02 08:08:51.056960261 +0000 UTC m=+44.584821634" watchObservedRunningTime="2024-07-02 08:08:55.790599516 +0000 UTC m=+49.318460865" Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.787 [INFO][4691] k8s.go 608: Cleaning up netns ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.788 [INFO][4691] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" iface="eth0" netns="/var/run/netns/cni-3251e254-25f9-7cf9-24b8-e18308923e3e" Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.791 [INFO][4691] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" iface="eth0" netns="/var/run/netns/cni-3251e254-25f9-7cf9-24b8-e18308923e3e" Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.794 [INFO][4691] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" iface="eth0" netns="/var/run/netns/cni-3251e254-25f9-7cf9-24b8-e18308923e3e" Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.794 [INFO][4691] k8s.go 615: Releasing IP address(es) ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.794 [INFO][4691] utils.go 188: Calico CNI releasing IP address ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.839 [INFO][4697] ipam_plugin.go 411: Releasing address using handleID ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" HandleID="k8s-pod-network.e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.840 [INFO][4697] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.840 [INFO][4697] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.856 [WARNING][4697] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" HandleID="k8s-pod-network.e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.856 [INFO][4697] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" HandleID="k8s-pod-network.e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.858 [INFO][4697] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:08:55.864761 containerd[1920]: 2024-07-02 08:08:55.861 [INFO][4691] k8s.go 621: Teardown processing complete. ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:08:55.869427 containerd[1920]: time="2024-07-02T08:08:55.867409309Z" level=info msg="TearDown network for sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\" successfully" Jul 2 08:08:55.869427 containerd[1920]: time="2024-07-02T08:08:55.867485677Z" level=info msg="StopPodSandbox for \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\" returns successfully" Jul 2 08:08:55.870535 containerd[1920]: time="2024-07-02T08:08:55.870043873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8l6zv,Uid:7f019231-67cc-4115-a1b0-8a99961292ba,Namespace:kube-system,Attempt:1,}" Jul 2 08:08:55.872557 systemd[1]: run-netns-cni\x2d3251e254\x2d25f9\x2d7cf9\x2d24b8\x2de18308923e3e.mount: Deactivated successfully. Jul 2 08:08:56.128624 systemd-networkd[1842]: cali3dfc40b37a6: Link UP Jul 2 08:08:56.129050 systemd-networkd[1842]: cali3dfc40b37a6: Gained carrier Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:55.989 [INFO][4705] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0 coredns-7db6d8ff4d- kube-system 7f019231-67cc-4115-a1b0-8a99961292ba 789 0 2024-07-02 08:08:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-163 coredns-7db6d8ff4d-8l6zv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3dfc40b37a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8l6zv" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:55.991 [INFO][4705] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8l6zv" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.051 [INFO][4716] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" HandleID="k8s-pod-network.57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 
08:08:56.071 [INFO][4716] ipam_plugin.go 264: Auto assigning IP ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" HandleID="k8s-pod-network.57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003029b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-163", "pod":"coredns-7db6d8ff4d-8l6zv", "timestamp":"2024-07-02 08:08:56.051794074 +0000 UTC"}, Hostname:"ip-172-31-16-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.071 [INFO][4716] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.071 [INFO][4716] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.071 [INFO][4716] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-163' Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.074 [INFO][4716] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" host="ip-172-31-16-163" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.080 [INFO][4716] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-163" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.087 [INFO][4716] ipam.go 489: Trying affinity for 192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.091 [INFO][4716] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.095 [INFO][4716] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.095 [INFO][4716] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" host="ip-172-31-16-163" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.098 [INFO][4716] ipam.go 1685: Creating new handle: k8s-pod-network.57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.104 [INFO][4716] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" host="ip-172-31-16-163" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.113 [INFO][4716] ipam.go 1216: Successfully claimed IPs: [192.168.5.65/26] block=192.168.5.64/26 handle="k8s-pod-network.57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" host="ip-172-31-16-163" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.114 [INFO][4716] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.65/26] handle="k8s-pod-network.57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" host="ip-172-31-16-163" Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.114 [INFO][4716] ipam_plugin.go 373: Released host-wide IPAM lock. 
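(The ipam.go records above walk through Calico's block-affinity allocation for this node: acquire the host-wide IPAM lock, confirm the node's affinity for the 192.168.5.64/26 block, take the next free address in that block (192.168.5.65), write the block back to claim the IP, then release the lock. The following is an in-memory sketch of that sequence only, under the block and handle shown in the log; it is not Calico's actual IPAM code.)

    package main

    import (
    	"fmt"
    	"net/netip"
    	"sync"
    )

    // block models a /26 allocation block for which this node holds the
    // affinity, like 192.168.5.64/26 in the log.
    type block struct {
    	cidr      netip.Prefix
    	allocated map[netip.Addr]string // address -> handle that claimed it
    }

    var ipamLock sync.Mutex // stands in for the host-wide IPAM lock

    // assign claims the next free address in the block for the given handle,
    // mirroring "Attempting to assign 1 addresses from block".
    func assign(b *block, handle string) (netip.Addr, error) {
    	ipamLock.Lock()         // "About to acquire host-wide IPAM lock."
    	defer ipamLock.Unlock() // "Released host-wide IPAM lock."

    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if _, used := b.allocated[a]; !used {
    			b.allocated[a] = handle // "Writing block in order to claim IPs"
    			return a, nil
    		}
    	}
    	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
    	b := &block{
    		cidr:      netip.MustParsePrefix("192.168.5.64/26"),
    		allocated: map[netip.Addr]string{},
    	}
    	// Reserve the network address itself, as a real allocator would.
    	b.allocated[b.cidr.Addr()] = "reserved"

    	// Handle ID as used in the log for coredns-7db6d8ff4d-8l6zv.
    	ip, err := assign(b, "k8s-pod-network.57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a")
    	if err != nil {
    		fmt.Println("assign failed:", err)
    		return
    	}
    	fmt.Println("assigned", ip) // 192.168.5.65, matching the log
    }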
Jul 2 08:08:56.157162 containerd[1920]: 2024-07-02 08:08:56.114 [INFO][4716] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.5.65/26] IPv6=[] ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" HandleID="k8s-pod-network.57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:08:56.158329 containerd[1920]: 2024-07-02 08:08:56.120 [INFO][4705] k8s.go 386: Populated endpoint ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8l6zv" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7f019231-67cc-4115-a1b0-8a99961292ba", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"", Pod:"coredns-7db6d8ff4d-8l6zv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dfc40b37a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:08:56.158329 containerd[1920]: 2024-07-02 08:08:56.120 [INFO][4705] k8s.go 387: Calico CNI using IPs: [192.168.5.65/32] ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8l6zv" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:08:56.158329 containerd[1920]: 2024-07-02 08:08:56.120 [INFO][4705] dataplane_linux.go 68: Setting the host side veth name to cali3dfc40b37a6 ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8l6zv" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:08:56.158329 containerd[1920]: 2024-07-02 08:08:56.124 [INFO][4705] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8l6zv" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:08:56.158329 containerd[1920]: 2024-07-02 
08:08:56.125 [INFO][4705] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8l6zv" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7f019231-67cc-4115-a1b0-8a99961292ba", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a", Pod:"coredns-7db6d8ff4d-8l6zv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dfc40b37a6", MAC:"36:26:a0:b9:70:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:08:56.158329 containerd[1920]: 2024-07-02 08:08:56.147 [INFO][4705] k8s.go 500: Wrote updated endpoint to datastore ContainerID="57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8l6zv" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:08:56.214801 containerd[1920]: time="2024-07-02T08:08:56.214365418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:56.214801 containerd[1920]: time="2024-07-02T08:08:56.214472110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:56.214801 containerd[1920]: time="2024-07-02T08:08:56.214523062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:56.214801 containerd[1920]: time="2024-07-02T08:08:56.214548922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:56.276872 systemd[1]: Started cri-containerd-57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a.scope - libcontainer container 57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a. 
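(In the WorkloadEndpoint dumps above the port numbers are printed in hex: Port:0x35 is 3*16 + 5 = 53, the dns and dns-tcp ports, and Port:0x23c1 is 2*4096 + 3*256 + 12*16 + 1 = 9153, the coredns metrics port; these match the named ports listed earlier in the same records, {dns UDP 53 0}, {dns-tcp TCP 53 0}, {metrics TCP 9153 0}.)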
Jul 2 08:08:56.367751 containerd[1920]: time="2024-07-02T08:08:56.367689311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8l6zv,Uid:7f019231-67cc-4115-a1b0-8a99961292ba,Namespace:kube-system,Attempt:1,} returns sandbox id \"57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a\"" Jul 2 08:08:56.375259 containerd[1920]: time="2024-07-02T08:08:56.375156299Z" level=info msg="CreateContainer within sandbox \"57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:08:56.435157 containerd[1920]: time="2024-07-02T08:08:56.433968888Z" level=info msg="CreateContainer within sandbox \"57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2da9c949f5e9adede1a4ed45429a2472a7a1fa2e2987f9ed1c1e416fe0cdece5\"" Jul 2 08:08:56.435157 containerd[1920]: time="2024-07-02T08:08:56.435138624Z" level=info msg="StartContainer for \"2da9c949f5e9adede1a4ed45429a2472a7a1fa2e2987f9ed1c1e416fe0cdece5\"" Jul 2 08:08:56.496585 systemd[1]: Started cri-containerd-2da9c949f5e9adede1a4ed45429a2472a7a1fa2e2987f9ed1c1e416fe0cdece5.scope - libcontainer container 2da9c949f5e9adede1a4ed45429a2472a7a1fa2e2987f9ed1c1e416fe0cdece5. Jul 2 08:08:56.575370 containerd[1920]: time="2024-07-02T08:08:56.575305392Z" level=info msg="StartContainer for \"2da9c949f5e9adede1a4ed45429a2472a7a1fa2e2987f9ed1c1e416fe0cdece5\" returns successfully" Jul 2 08:08:56.685138 containerd[1920]: time="2024-07-02T08:08:56.683388121Z" level=info msg="StopPodSandbox for \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\"" Jul 2 08:08:56.881497 systemd[1]: run-containerd-runc-k8s.io-57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a-runc.qPPyLR.mount: Deactivated successfully. Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.802 [INFO][4823] k8s.go 608: Cleaning up netns ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.803 [INFO][4823] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" iface="eth0" netns="/var/run/netns/cni-f5a472d9-4017-4361-8d39-45995e3bcc2e" Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.804 [INFO][4823] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" iface="eth0" netns="/var/run/netns/cni-f5a472d9-4017-4361-8d39-45995e3bcc2e" Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.804 [INFO][4823] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" iface="eth0" netns="/var/run/netns/cni-f5a472d9-4017-4361-8d39-45995e3bcc2e" Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.804 [INFO][4823] k8s.go 615: Releasing IP address(es) ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.804 [INFO][4823] utils.go 188: Calico CNI releasing IP address ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.888 [INFO][4829] ipam_plugin.go 411: Releasing address using handleID ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" HandleID="k8s-pod-network.28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.889 [INFO][4829] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.889 [INFO][4829] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.910 [WARNING][4829] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" HandleID="k8s-pod-network.28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.910 [INFO][4829] ipam_plugin.go 439: Releasing address using workloadID ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" HandleID="k8s-pod-network.28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.913 [INFO][4829] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:08:56.921422 containerd[1920]: 2024-07-02 08:08:56.918 [INFO][4823] k8s.go 621: Teardown processing complete. ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:08:56.927672 containerd[1920]: time="2024-07-02T08:08:56.921651014Z" level=info msg="TearDown network for sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\" successfully" Jul 2 08:08:56.927672 containerd[1920]: time="2024-07-02T08:08:56.921695090Z" level=info msg="StopPodSandbox for \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\" returns successfully" Jul 2 08:08:56.927672 containerd[1920]: time="2024-07-02T08:08:56.927153710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hwgh6,Uid:3c33e4ef-e5bf-43cc-81b9-869ef5820fac,Namespace:kube-system,Attempt:1,}" Jul 2 08:08:56.929156 systemd[1]: run-netns-cni\x2df5a472d9\x2d4017\x2d4361\x2d8d39\x2d45995e3bcc2e.mount: Deactivated successfully. Jul 2 08:08:57.108789 systemd[1]: Started sshd@9-172.31.16.163:22-139.178.89.65:40246.service - OpenSSH per-connection server daemon (139.178.89.65:40246). 
Jul 2 08:08:57.131615 kubelet[3344]: I0702 08:08:57.128404 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8l6zv" podStartSLOduration=36.128380547 podStartE2EDuration="36.128380547s" podCreationTimestamp="2024-07-02 08:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:08:57.075103643 +0000 UTC m=+50.602964992" watchObservedRunningTime="2024-07-02 08:08:57.128380547 +0000 UTC m=+50.656241884" Jul 2 08:08:57.346208 sshd[4845]: Accepted publickey for core from 139.178.89.65 port 40246 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:08:57.350944 sshd[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:08:57.366806 systemd-logind[1911]: New session 10 of user core. Jul 2 08:08:57.377804 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 08:08:57.408175 systemd-networkd[1842]: cali9865369c958: Link UP Jul 2 08:08:57.411337 systemd-networkd[1842]: cali9865369c958: Gained carrier Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.168 [INFO][4839] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0 coredns-7db6d8ff4d- kube-system 3c33e4ef-e5bf-43cc-81b9-869ef5820fac 803 0 2024-07-02 08:08:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-163 coredns-7db6d8ff4d-hwgh6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9865369c958 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hwgh6" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.180 [INFO][4839] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hwgh6" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.306 [INFO][4853] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" HandleID="k8s-pod-network.f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.330 [INFO][4853] ipam_plugin.go 264: Auto assigning IP ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" HandleID="k8s-pod-network.f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000244be0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-163", "pod":"coredns-7db6d8ff4d-hwgh6", "timestamp":"2024-07-02 08:08:57.306924384 +0000 UTC"}, Hostname:"ip-172-31-16-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.330 [INFO][4853] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.330 [INFO][4853] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.330 [INFO][4853] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-163' Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.334 [INFO][4853] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" host="ip-172-31-16-163" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.343 [INFO][4853] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-163" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.353 [INFO][4853] ipam.go 489: Trying affinity for 192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.358 [INFO][4853] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.367 [INFO][4853] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.367 [INFO][4853] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" host="ip-172-31-16-163" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.373 [INFO][4853] ipam.go 1685: Creating new handle: k8s-pod-network.f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118 Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.385 [INFO][4853] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" host="ip-172-31-16-163" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.396 [INFO][4853] ipam.go 1216: Successfully claimed IPs: [192.168.5.66/26] block=192.168.5.64/26 handle="k8s-pod-network.f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" host="ip-172-31-16-163" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.396 [INFO][4853] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.66/26] handle="k8s-pod-network.f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" host="ip-172-31-16-163" Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.396 [INFO][4853] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:08:57.449821 containerd[1920]: 2024-07-02 08:08:57.397 [INFO][4853] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.5.66/26] IPv6=[] ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" HandleID="k8s-pod-network.f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:08:57.453211 containerd[1920]: 2024-07-02 08:08:57.401 [INFO][4839] k8s.go 386: Populated endpoint ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hwgh6" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3c33e4ef-e5bf-43cc-81b9-869ef5820fac", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"", Pod:"coredns-7db6d8ff4d-hwgh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9865369c958", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:08:57.453211 containerd[1920]: 2024-07-02 08:08:57.401 [INFO][4839] k8s.go 387: Calico CNI using IPs: [192.168.5.66/32] ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hwgh6" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:08:57.453211 containerd[1920]: 2024-07-02 08:08:57.401 [INFO][4839] dataplane_linux.go 68: Setting the host side veth name to cali9865369c958 ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hwgh6" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:08:57.453211 containerd[1920]: 2024-07-02 08:08:57.410 [INFO][4839] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hwgh6" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:08:57.453211 containerd[1920]: 2024-07-02 
08:08:57.410 [INFO][4839] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hwgh6" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3c33e4ef-e5bf-43cc-81b9-869ef5820fac", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118", Pod:"coredns-7db6d8ff4d-hwgh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9865369c958", MAC:"ae:69:da:3a:b3:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:08:57.453211 containerd[1920]: 2024-07-02 08:08:57.442 [INFO][4839] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hwgh6" WorkloadEndpoint="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:08:57.470152 systemd-networkd[1842]: cali3dfc40b37a6: Gained IPv6LL Jul 2 08:08:57.521796 containerd[1920]: time="2024-07-02T08:08:57.521498689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:57.523748 containerd[1920]: time="2024-07-02T08:08:57.521704801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:57.523748 containerd[1920]: time="2024-07-02T08:08:57.522434989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:57.523748 containerd[1920]: time="2024-07-02T08:08:57.522489229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:57.601033 systemd[1]: Started cri-containerd-f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118.scope - libcontainer container f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118. Jul 2 08:08:57.728299 containerd[1920]: time="2024-07-02T08:08:57.728103206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hwgh6,Uid:3c33e4ef-e5bf-43cc-81b9-869ef5820fac,Namespace:kube-system,Attempt:1,} returns sandbox id \"f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118\"" Jul 2 08:08:57.741084 containerd[1920]: time="2024-07-02T08:08:57.740743238Z" level=info msg="CreateContainer within sandbox \"f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:08:57.771771 containerd[1920]: time="2024-07-02T08:08:57.771583946Z" level=info msg="CreateContainer within sandbox \"f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e51deb75e01116a9657a346d98badf1509134059cc30c7a5a1883b80e4a04c4\"" Jul 2 08:08:57.772552 containerd[1920]: time="2024-07-02T08:08:57.772455446Z" level=info msg="StartContainer for \"2e51deb75e01116a9657a346d98badf1509134059cc30c7a5a1883b80e4a04c4\"" Jul 2 08:08:57.791333 sshd[4845]: pam_unix(sshd:session): session closed for user core Jul 2 08:08:57.800787 systemd-logind[1911]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:08:57.801321 systemd[1]: sshd@9-172.31.16.163:22-139.178.89.65:40246.service: Deactivated successfully. Jul 2 08:08:57.808329 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:08:57.816463 systemd-logind[1911]: Removed session 10. Jul 2 08:08:57.834578 systemd[1]: Started cri-containerd-2e51deb75e01116a9657a346d98badf1509134059cc30c7a5a1883b80e4a04c4.scope - libcontainer container 2e51deb75e01116a9657a346d98badf1509134059cc30c7a5a1883b80e4a04c4. Jul 2 08:08:57.917926 containerd[1920]: time="2024-07-02T08:08:57.917854215Z" level=info msg="StartContainer for \"2e51deb75e01116a9657a346d98badf1509134059cc30c7a5a1883b80e4a04c4\" returns successfully" Jul 2 08:08:58.683610 containerd[1920]: time="2024-07-02T08:08:58.683217963Z" level=info msg="StopPodSandbox for \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\"" Jul 2 08:08:58.778592 kubelet[3344]: I0702 08:08:58.777132 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hwgh6" podStartSLOduration=37.777099459 podStartE2EDuration="37.777099459s" podCreationTimestamp="2024-07-02 08:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:08:58.0765279 +0000 UTC m=+51.604389345" watchObservedRunningTime="2024-07-02 08:08:58.777099459 +0000 UTC m=+52.304960796" Jul 2 08:08:58.814548 systemd-networkd[1842]: cali9865369c958: Gained IPv6LL Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.776 [INFO][4982] k8s.go 608: Cleaning up netns ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.777 [INFO][4982] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" iface="eth0" netns="/var/run/netns/cni-4e5a2fa3-c217-9936-4dcd-ecbcc1beebbc" Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.777 [INFO][4982] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" iface="eth0" netns="/var/run/netns/cni-4e5a2fa3-c217-9936-4dcd-ecbcc1beebbc" Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.778 [INFO][4982] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" iface="eth0" netns="/var/run/netns/cni-4e5a2fa3-c217-9936-4dcd-ecbcc1beebbc" Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.778 [INFO][4982] k8s.go 615: Releasing IP address(es) ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.779 [INFO][4982] utils.go 188: Calico CNI releasing IP address ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.841 [INFO][4989] ipam_plugin.go 411: Releasing address using handleID ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" HandleID="k8s-pod-network.a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.842 [INFO][4989] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.842 [INFO][4989] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.858 [WARNING][4989] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" HandleID="k8s-pod-network.a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.858 [INFO][4989] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" HandleID="k8s-pod-network.a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.862 [INFO][4989] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:08:58.868604 containerd[1920]: 2024-07-02 08:08:58.864 [INFO][4982] k8s.go 621: Teardown processing complete. ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:08:58.873753 containerd[1920]: time="2024-07-02T08:08:58.872422348Z" level=info msg="TearDown network for sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\" successfully" Jul 2 08:08:58.873753 containerd[1920]: time="2024-07-02T08:08:58.872519260Z" level=info msg="StopPodSandbox for \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\" returns successfully" Jul 2 08:08:58.876577 systemd[1]: run-netns-cni\x2d4e5a2fa3\x2dc217\x2d9936\x2d4dcd\x2decbcc1beebbc.mount: Deactivated successfully. 
Jul 2 08:08:58.881014 containerd[1920]: time="2024-07-02T08:08:58.877421812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79465c8fc4-tcgmn,Uid:66390efd-8112-4c91-bea1-23630113ea89,Namespace:calico-system,Attempt:1,}" Jul 2 08:08:59.163044 systemd-networkd[1842]: cali48a5ac4c4fd: Link UP Jul 2 08:08:59.164076 systemd-networkd[1842]: cali48a5ac4c4fd: Gained carrier Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:58.989 [INFO][5002] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0 calico-kube-controllers-79465c8fc4- calico-system 66390efd-8112-4c91-bea1-23630113ea89 831 0 2024-07-02 08:08:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79465c8fc4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-163 calico-kube-controllers-79465c8fc4-tcgmn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali48a5ac4c4fd [] []}} ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Namespace="calico-system" Pod="calico-kube-controllers-79465c8fc4-tcgmn" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:58.990 [INFO][5002] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Namespace="calico-system" Pod="calico-kube-controllers-79465c8fc4-tcgmn" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.050 [INFO][5013] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" HandleID="k8s-pod-network.926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.076 [INFO][5013] ipam_plugin.go 264: Auto assigning IP ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" HandleID="k8s-pod-network.926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ced0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-163", "pod":"calico-kube-controllers-79465c8fc4-tcgmn", "timestamp":"2024-07-02 08:08:59.050472241 +0000 UTC"}, Hostname:"ip-172-31-16-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.078 [INFO][5013] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.078 [INFO][5013] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.078 [INFO][5013] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-163' Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.081 [INFO][5013] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" host="ip-172-31-16-163" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.095 [INFO][5013] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-163" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.115 [INFO][5013] ipam.go 489: Trying affinity for 192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.118 [INFO][5013] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.125 [INFO][5013] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.125 [INFO][5013] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" host="ip-172-31-16-163" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.131 [INFO][5013] ipam.go 1685: Creating new handle: k8s-pod-network.926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601 Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.140 [INFO][5013] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" host="ip-172-31-16-163" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.151 [INFO][5013] ipam.go 1216: Successfully claimed IPs: [192.168.5.67/26] block=192.168.5.64/26 handle="k8s-pod-network.926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" host="ip-172-31-16-163" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.151 [INFO][5013] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.67/26] handle="k8s-pod-network.926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" host="ip-172-31-16-163" Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.151 [INFO][5013] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:08:59.203358 containerd[1920]: 2024-07-02 08:08:59.152 [INFO][5013] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.5.67/26] IPv6=[] ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" HandleID="k8s-pod-network.926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:08:59.209633 containerd[1920]: 2024-07-02 08:08:59.155 [INFO][5002] k8s.go 386: Populated endpoint ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Namespace="calico-system" Pod="calico-kube-controllers-79465c8fc4-tcgmn" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0", GenerateName:"calico-kube-controllers-79465c8fc4-", Namespace:"calico-system", SelfLink:"", UID:"66390efd-8112-4c91-bea1-23630113ea89", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79465c8fc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"", Pod:"calico-kube-controllers-79465c8fc4-tcgmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48a5ac4c4fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:08:59.209633 containerd[1920]: 2024-07-02 08:08:59.156 [INFO][5002] k8s.go 387: Calico CNI using IPs: [192.168.5.67/32] ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Namespace="calico-system" Pod="calico-kube-controllers-79465c8fc4-tcgmn" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:08:59.209633 containerd[1920]: 2024-07-02 08:08:59.156 [INFO][5002] dataplane_linux.go 68: Setting the host side veth name to cali48a5ac4c4fd ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Namespace="calico-system" Pod="calico-kube-controllers-79465c8fc4-tcgmn" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:08:59.209633 containerd[1920]: 2024-07-02 08:08:59.165 [INFO][5002] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Namespace="calico-system" Pod="calico-kube-controllers-79465c8fc4-tcgmn" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:08:59.209633 containerd[1920]: 2024-07-02 08:08:59.167 [INFO][5002] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Namespace="calico-system" Pod="calico-kube-controllers-79465c8fc4-tcgmn" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0", GenerateName:"calico-kube-controllers-79465c8fc4-", Namespace:"calico-system", SelfLink:"", UID:"66390efd-8112-4c91-bea1-23630113ea89", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79465c8fc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601", Pod:"calico-kube-controllers-79465c8fc4-tcgmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48a5ac4c4fd", MAC:"72:9a:78:d9:45:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:08:59.209633 containerd[1920]: 2024-07-02 08:08:59.192 [INFO][5002] k8s.go 500: Wrote updated endpoint to datastore ContainerID="926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601" Namespace="calico-system" Pod="calico-kube-controllers-79465c8fc4-tcgmn" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:08:59.275852 containerd[1920]: time="2024-07-02T08:08:59.275325470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:08:59.275852 containerd[1920]: time="2024-07-02T08:08:59.275613458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:59.275852 containerd[1920]: time="2024-07-02T08:08:59.275696198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:08:59.275852 containerd[1920]: time="2024-07-02T08:08:59.275746718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:08:59.329560 systemd[1]: Started cri-containerd-926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601.scope - libcontainer container 926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601. 
Jul 2 08:08:59.401141 containerd[1920]: time="2024-07-02T08:08:59.401057270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79465c8fc4-tcgmn,Uid:66390efd-8112-4c91-bea1-23630113ea89,Namespace:calico-system,Attempt:1,} returns sandbox id \"926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601\"" Jul 2 08:08:59.406465 containerd[1920]: time="2024-07-02T08:08:59.406353374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 08:08:59.680691 containerd[1920]: time="2024-07-02T08:08:59.680596036Z" level=info msg="StopPodSandbox for \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\"" Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.800 [INFO][5089] k8s.go 608: Cleaning up netns ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.801 [INFO][5089] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" iface="eth0" netns="/var/run/netns/cni-7c109db6-8df7-cfb7-8e29-cbfb6dbbee5a" Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.804 [INFO][5089] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" iface="eth0" netns="/var/run/netns/cni-7c109db6-8df7-cfb7-8e29-cbfb6dbbee5a" Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.805 [INFO][5089] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" iface="eth0" netns="/var/run/netns/cni-7c109db6-8df7-cfb7-8e29-cbfb6dbbee5a" Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.806 [INFO][5089] k8s.go 615: Releasing IP address(es) ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.806 [INFO][5089] utils.go 188: Calico CNI releasing IP address ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.868 [INFO][5096] ipam_plugin.go 411: Releasing address using handleID ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" HandleID="k8s-pod-network.9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.871 [INFO][5096] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.871 [INFO][5096] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.889 [WARNING][5096] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" HandleID="k8s-pod-network.9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.889 [INFO][5096] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" HandleID="k8s-pod-network.9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.892 [INFO][5096] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:08:59.900432 containerd[1920]: 2024-07-02 08:08:59.895 [INFO][5089] k8s.go 621: Teardown processing complete. ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:08:59.905128 containerd[1920]: time="2024-07-02T08:08:59.901488101Z" level=info msg="TearDown network for sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\" successfully" Jul 2 08:08:59.905128 containerd[1920]: time="2024-07-02T08:08:59.901541021Z" level=info msg="StopPodSandbox for \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\" returns successfully" Jul 2 08:08:59.909178 containerd[1920]: time="2024-07-02T08:08:59.905750609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x68xj,Uid:6f6df60c-e3f9-43ed-b240-8e9935f2d2eb,Namespace:calico-system,Attempt:1,}" Jul 2 08:08:59.909586 systemd[1]: run-netns-cni\x2d7c109db6\x2d8df7\x2dcfb7\x2d8e29\x2dcbfb6dbbee5a.mount: Deactivated successfully. Jul 2 08:09:00.164592 systemd-networkd[1842]: calib8271107473: Link UP Jul 2 08:09:00.166682 systemd-networkd[1842]: calib8271107473: Gained carrier Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.015 [INFO][5105] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0 csi-node-driver- calico-system 6f6df60c-e3f9-43ed-b240-8e9935f2d2eb 844 0 2024-07-02 08:08:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-16-163 csi-node-driver-x68xj eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib8271107473 [] []}} ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Namespace="calico-system" Pod="csi-node-driver-x68xj" WorkloadEndpoint="ip--172--31--16--163-k8s-csi--node--driver--x68xj-" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.015 [INFO][5105] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Namespace="calico-system" Pod="csi-node-driver-x68xj" WorkloadEndpoint="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.078 [INFO][5114] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" HandleID="k8s-pod-network.b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:00.204756 
containerd[1920]: 2024-07-02 08:09:00.097 [INFO][5114] ipam_plugin.go 264: Auto assigning IP ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" HandleID="k8s-pod-network.b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003162d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-163", "pod":"csi-node-driver-x68xj", "timestamp":"2024-07-02 08:09:00.07882949 +0000 UTC"}, Hostname:"ip-172-31-16-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.097 [INFO][5114] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.097 [INFO][5114] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.097 [INFO][5114] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-163' Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.100 [INFO][5114] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" host="ip-172-31-16-163" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.111 [INFO][5114] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-163" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.119 [INFO][5114] ipam.go 489: Trying affinity for 192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.123 [INFO][5114] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.129 [INFO][5114] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.130 [INFO][5114] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" host="ip-172-31-16-163" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.132 [INFO][5114] ipam.go 1685: Creating new handle: k8s-pod-network.b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.138 [INFO][5114] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" host="ip-172-31-16-163" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.151 [INFO][5114] ipam.go 1216: Successfully claimed IPs: [192.168.5.68/26] block=192.168.5.64/26 handle="k8s-pod-network.b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" host="ip-172-31-16-163" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.151 [INFO][5114] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.68/26] handle="k8s-pod-network.b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" host="ip-172-31-16-163" Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.151 [INFO][5114] ipam_plugin.go 373: Released host-wide IPAM lock. 
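Each containerd entry above re-logs a line from the Calico CNI plugin in the shape "timestamp [LEVEL][pid] file.go line: message". A small helper for pulling those fields apart while grepping through this journal; the field layout is assumed from the samples above, and the parser is not part of Calico or containerd:

```go
package main

import (
	"fmt"
	"regexp"
)

// calicoLine matches e.g.
//   2024-07-02 08:09:00.097 [INFO][5114] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
var calicoLine = regexp.MustCompile(
	`^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	line := "2024-07-02 08:09:00.097 [INFO][5114] ipam_plugin.go 352: About to acquire host-wide IPAM lock."
	m := calicoLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("time=%s level=%s pid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```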
Jul 2 08:09:00.204756 containerd[1920]: 2024-07-02 08:09:00.151 [INFO][5114] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.5.68/26] IPv6=[] ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" HandleID="k8s-pod-network.b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:00.208717 containerd[1920]: 2024-07-02 08:09:00.156 [INFO][5105] k8s.go 386: Populated endpoint ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Namespace="calico-system" Pod="csi-node-driver-x68xj" WorkloadEndpoint="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"", Pod:"csi-node-driver-x68xj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib8271107473", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:00.208717 containerd[1920]: 2024-07-02 08:09:00.156 [INFO][5105] k8s.go 387: Calico CNI using IPs: [192.168.5.68/32] ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Namespace="calico-system" Pod="csi-node-driver-x68xj" WorkloadEndpoint="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:00.208717 containerd[1920]: 2024-07-02 08:09:00.156 [INFO][5105] dataplane_linux.go 68: Setting the host side veth name to calib8271107473 ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Namespace="calico-system" Pod="csi-node-driver-x68xj" WorkloadEndpoint="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:00.208717 containerd[1920]: 2024-07-02 08:09:00.166 [INFO][5105] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Namespace="calico-system" Pod="csi-node-driver-x68xj" WorkloadEndpoint="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:00.208717 containerd[1920]: 2024-07-02 08:09:00.167 [INFO][5105] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Namespace="calico-system" Pod="csi-node-driver-x68xj" WorkloadEndpoint="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d", Pod:"csi-node-driver-x68xj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib8271107473", MAC:"26:98:18:80:71:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:00.208717 containerd[1920]: 2024-07-02 08:09:00.193 [INFO][5105] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d" Namespace="calico-system" Pod="csi-node-driver-x68xj" WorkloadEndpoint="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:00.272804 containerd[1920]: time="2024-07-02T08:09:00.271850259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:09:00.272804 containerd[1920]: time="2024-07-02T08:09:00.271983747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:09:00.272804 containerd[1920]: time="2024-07-02T08:09:00.272039799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:09:00.272804 containerd[1920]: time="2024-07-02T08:09:00.272075991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:09:00.327172 systemd[1]: Started cri-containerd-b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d.scope - libcontainer container b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d. Jul 2 08:09:00.390551 containerd[1920]: time="2024-07-02T08:09:00.390482487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x68xj,Uid:6f6df60c-e3f9-43ed-b240-8e9935f2d2eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d\"" Jul 2 08:09:00.925856 systemd-networkd[1842]: cali48a5ac4c4fd: Gained IPv6LL Jul 2 08:09:02.205708 systemd-networkd[1842]: calib8271107473: Gained IPv6LL Jul 2 08:09:02.836762 systemd[1]: Started sshd@10-172.31.16.163:22-139.178.89.65:33090.service - OpenSSH per-connection server daemon (139.178.89.65:33090). 
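systemd-networkd reports calib8271107473 coming up and later gaining an IPv6 link-local address, and the endpoint record above carries the pod-side MAC 26:98:18:80:71:e7. The fe80:: addresses seen in this journal are simply the modified EUI-64 form of the interface MAC; Calico normally pins the host side of each cali* veth to ee:ee:ee:ee:ee:ee, which would explain the identical fe80::ecee:eeff:feee:eeee on every cali interface in the ntpd entries further down (stated here as background, not something the log itself spells out). A sketch of the derivation:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocalFromMAC builds the modified EUI-64 link-local address for a MAC:
// flip the universal/local bit of the first octet and splice ff:fe into the middle.
func linkLocalFromMAC(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/64 prefix
	b[8] = mac[0] ^ 0x02
	b[9], b[10], b[11] = mac[1], mac[2], 0xff
	b[12], b[13], b[14], b[15] = 0xfe, mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	hostSide, _ := net.ParseMAC("ee:ee:ee:ee:ee:ee") // typical Calico host-side veth MAC
	podSide, _ := net.ParseMAC("26:98:18:80:71:e7")  // MAC from the endpoint above
	fmt.Println(linkLocalFromMAC(hostSide)) // fe80::ecee:eeff:feee:eeee
	fmt.Println(linkLocalFromMAC(podSide))  // fe80::2498:18ff:fe80:71e7
}
```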
Jul 2 08:09:03.065147 sshd[5177]: Accepted publickey for core from 139.178.89.65 port 33090 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:03.070126 sshd[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:03.098355 systemd-logind[1911]: New session 11 of user core. Jul 2 08:09:03.104023 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 08:09:03.563664 sshd[5177]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:03.577359 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:09:03.584836 systemd[1]: sshd@10-172.31.16.163:22-139.178.89.65:33090.service: Deactivated successfully. Jul 2 08:09:03.594909 systemd-logind[1911]: Session 11 logged out. Waiting for processes to exit. Jul 2 08:09:03.619892 systemd[1]: Started sshd@11-172.31.16.163:22-139.178.89.65:33106.service - OpenSSH per-connection server daemon (139.178.89.65:33106). Jul 2 08:09:03.623114 systemd-logind[1911]: Removed session 11. Jul 2 08:09:03.839984 sshd[5195]: Accepted publickey for core from 139.178.89.65 port 33106 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:03.843199 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:03.861120 systemd-logind[1911]: New session 12 of user core. Jul 2 08:09:03.876649 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 08:09:04.446963 sshd[5195]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:04.462662 systemd[1]: sshd@11-172.31.16.163:22-139.178.89.65:33106.service: Deactivated successfully. Jul 2 08:09:04.474701 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:09:04.479963 systemd-logind[1911]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:09:04.513274 systemd[1]: Started sshd@12-172.31.16.163:22-139.178.89.65:33110.service - OpenSSH per-connection server daemon (139.178.89.65:33110). Jul 2 08:09:04.517792 systemd-logind[1911]: Removed session 12. Jul 2 08:09:04.743930 sshd[5219]: Accepted publickey for core from 139.178.89.65 port 33110 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:04.747683 sshd[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:04.769691 systemd-logind[1911]: New session 13 of user core. Jul 2 08:09:04.777571 systemd[1]: Started session-13.scope - Session 13 of User core. 
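The SSH exchanges above (sessions 11, 12 and 13 from 139.178.89.65) each open and close within about a second. If you need to turn the journal's year-less timestamps into durations while reading this log, a minimal sketch (the layout string is assumed from the prefix format seen here):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "Jan 2 15:04:05.000000" // journal prefix as it appears above; no year
	opened, err := time.Parse(layout, "Jul 2 08:09:03.098355") // session 11 opened
	if err != nil {
		panic(err)
	}
	closed, err := time.Parse(layout, "Jul 2 08:09:03.563664") // session 11 closed
	if err != nil {
		panic(err)
	}
	fmt.Println("session 11 lasted", closed.Sub(opened)) // ~465ms
}
```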
Jul 2 08:09:04.812892 ntpd[1902]: Listen normally on 8 vxlan.calico 192.168.5.64:123 Jul 2 08:09:04.815166 ntpd[1902]: 2 Jul 08:09:04 ntpd[1902]: Listen normally on 8 vxlan.calico 192.168.5.64:123 Jul 2 08:09:04.815166 ntpd[1902]: 2 Jul 08:09:04 ntpd[1902]: Listen normally on 9 vxlan.calico [fe80::6449:14ff:fef5:e229%4]:123 Jul 2 08:09:04.815166 ntpd[1902]: 2 Jul 08:09:04 ntpd[1902]: Listen normally on 10 cali3dfc40b37a6 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 08:09:04.815166 ntpd[1902]: 2 Jul 08:09:04 ntpd[1902]: Listen normally on 11 cali9865369c958 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 08:09:04.815166 ntpd[1902]: 2 Jul 08:09:04 ntpd[1902]: Listen normally on 12 cali48a5ac4c4fd [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 08:09:04.815166 ntpd[1902]: 2 Jul 08:09:04 ntpd[1902]: Listen normally on 13 calib8271107473 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 08:09:04.813026 ntpd[1902]: Listen normally on 9 vxlan.calico [fe80::6449:14ff:fef5:e229%4]:123 Jul 2 08:09:04.813110 ntpd[1902]: Listen normally on 10 cali3dfc40b37a6 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 08:09:04.813178 ntpd[1902]: Listen normally on 11 cali9865369c958 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 08:09:04.813276 ntpd[1902]: Listen normally on 12 cali48a5ac4c4fd [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 08:09:04.813351 ntpd[1902]: Listen normally on 13 calib8271107473 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 08:09:04.945807 containerd[1920]: time="2024-07-02T08:09:04.945707050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:04.948544 containerd[1920]: time="2024-07-02T08:09:04.948447598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 08:09:04.952567 containerd[1920]: time="2024-07-02T08:09:04.950881570Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:04.957986 containerd[1920]: time="2024-07-02T08:09:04.957774526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:04.963162 containerd[1920]: time="2024-07-02T08:09:04.962638810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 5.5562092s" Jul 2 08:09:04.963162 containerd[1920]: time="2024-07-02T08:09:04.962705758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 08:09:04.967553 containerd[1920]: time="2024-07-02T08:09:04.967249894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 08:09:05.031699 containerd[1920]: time="2024-07-02T08:09:05.025553394Z" level=info msg="CreateContainer within sandbox \"926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 08:09:05.135859 containerd[1920]: 
time="2024-07-02T08:09:05.135719935Z" level=info msg="CreateContainer within sandbox \"926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"01ddf56f741fcd9af715a972a80052df60919c46239099f9d28bc19bce6728d7\"" Jul 2 08:09:05.139920 containerd[1920]: time="2024-07-02T08:09:05.139566151Z" level=info msg="StartContainer for \"01ddf56f741fcd9af715a972a80052df60919c46239099f9d28bc19bce6728d7\"" Jul 2 08:09:05.248105 sshd[5219]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:05.253984 systemd[1]: Started cri-containerd-01ddf56f741fcd9af715a972a80052df60919c46239099f9d28bc19bce6728d7.scope - libcontainer container 01ddf56f741fcd9af715a972a80052df60919c46239099f9d28bc19bce6728d7. Jul 2 08:09:05.265020 systemd[1]: sshd@12-172.31.16.163:22-139.178.89.65:33110.service: Deactivated successfully. Jul 2 08:09:05.272909 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 08:09:05.276442 systemd-logind[1911]: Session 13 logged out. Waiting for processes to exit. Jul 2 08:09:05.282960 systemd-logind[1911]: Removed session 13. Jul 2 08:09:05.453206 containerd[1920]: time="2024-07-02T08:09:05.453095264Z" level=info msg="StartContainer for \"01ddf56f741fcd9af715a972a80052df60919c46239099f9d28bc19bce6728d7\" returns successfully" Jul 2 08:09:06.172885 kubelet[3344]: I0702 08:09:06.172406 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79465c8fc4-tcgmn" podStartSLOduration=29.610791628 podStartE2EDuration="35.172379216s" podCreationTimestamp="2024-07-02 08:08:31 +0000 UTC" firstStartedPulling="2024-07-02 08:08:59.404613566 +0000 UTC m=+52.932474903" lastFinishedPulling="2024-07-02 08:09:04.966201166 +0000 UTC m=+58.494062491" observedRunningTime="2024-07-02 08:09:06.171187076 +0000 UTC m=+59.699048425" watchObservedRunningTime="2024-07-02 08:09:06.172379216 +0000 UTC m=+59.700240553" Jul 2 08:09:06.711126 containerd[1920]: time="2024-07-02T08:09:06.711068063Z" level=info msg="StopPodSandbox for \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\"" Jul 2 08:09:07.031403 containerd[1920]: time="2024-07-02T08:09:07.026975876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:07.047002 containerd[1920]: time="2024-07-02T08:09:07.046388084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 08:09:07.051254 containerd[1920]: time="2024-07-02T08:09:07.051158024Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:07.068450 containerd[1920]: time="2024-07-02T08:09:07.068168756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:07.072203 containerd[1920]: time="2024-07-02T08:09:07.071844788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 2.10453241s" Jul 2 
08:09:07.072203 containerd[1920]: time="2024-07-02T08:09:07.071911964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 08:09:07.083521 containerd[1920]: time="2024-07-02T08:09:07.083188640Z" level=info msg="CreateContainer within sandbox \"b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 08:09:07.139970 containerd[1920]: time="2024-07-02T08:09:07.139868241Z" level=info msg="CreateContainer within sandbox \"b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"aa75f686d69e02034d3e53e339d6b4e8c46c2e45b18e0c34ecd991e6a1760203\"" Jul 2 08:09:07.143595 containerd[1920]: time="2024-07-02T08:09:07.142429545Z" level=info msg="StartContainer for \"aa75f686d69e02034d3e53e339d6b4e8c46c2e45b18e0c34ecd991e6a1760203\"" Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:06.968 [WARNING][5297] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0", GenerateName:"calico-kube-controllers-79465c8fc4-", Namespace:"calico-system", SelfLink:"", UID:"66390efd-8112-4c91-bea1-23630113ea89", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79465c8fc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601", Pod:"calico-kube-controllers-79465c8fc4-tcgmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48a5ac4c4fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:06.972 [INFO][5297] k8s.go 608: Cleaning up netns ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:06.972 [INFO][5297] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" iface="eth0" netns="" Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:06.973 [INFO][5297] k8s.go 615: Releasing IP address(es) ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:06.973 [INFO][5297] utils.go 188: Calico CNI releasing IP address ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:07.078 [INFO][5304] ipam_plugin.go 411: Releasing address using handleID ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" HandleID="k8s-pod-network.a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:07.086 [INFO][5304] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:07.087 [INFO][5304] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:07.119 [WARNING][5304] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" HandleID="k8s-pod-network.a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:07.119 [INFO][5304] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" HandleID="k8s-pod-network.a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:07.136 [INFO][5304] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:09:07.172689 containerd[1920]: 2024-07-02 08:09:07.144 [INFO][5297] k8s.go 621: Teardown processing complete. ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:09:07.172689 containerd[1920]: time="2024-07-02T08:09:07.172123485Z" level=info msg="TearDown network for sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\" successfully" Jul 2 08:09:07.172689 containerd[1920]: time="2024-07-02T08:09:07.172181637Z" level=info msg="StopPodSandbox for \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\" returns successfully" Jul 2 08:09:07.178892 containerd[1920]: time="2024-07-02T08:09:07.177854649Z" level=info msg="RemovePodSandbox for \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\"" Jul 2 08:09:07.178892 containerd[1920]: time="2024-07-02T08:09:07.177927081Z" level=info msg="Forcibly stopping sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\"" Jul 2 08:09:07.345627 systemd[1]: Started cri-containerd-aa75f686d69e02034d3e53e339d6b4e8c46c2e45b18e0c34ecd991e6a1760203.scope - libcontainer container aa75f686d69e02034d3e53e339d6b4e8c46c2e45b18e0c34ecd991e6a1760203. Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.465 [WARNING][5334] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0", GenerateName:"calico-kube-controllers-79465c8fc4-", Namespace:"calico-system", SelfLink:"", UID:"66390efd-8112-4c91-bea1-23630113ea89", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79465c8fc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"926951784b2252976f1ec74827fd6a07ed4d714b0fa9d26f32014c1a742b6601", Pod:"calico-kube-controllers-79465c8fc4-tcgmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48a5ac4c4fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.467 [INFO][5334] k8s.go 608: Cleaning up netns ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.467 [INFO][5334] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" iface="eth0" netns="" Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.468 [INFO][5334] k8s.go 615: Releasing IP address(es) ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.468 [INFO][5334] utils.go 188: Calico CNI releasing IP address ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.552 [INFO][5373] ipam_plugin.go 411: Releasing address using handleID ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" HandleID="k8s-pod-network.a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.552 [INFO][5373] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.552 [INFO][5373] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.576 [WARNING][5373] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" HandleID="k8s-pod-network.a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.576 [INFO][5373] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" HandleID="k8s-pod-network.a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Workload="ip--172--31--16--163-k8s-calico--kube--controllers--79465c8fc4--tcgmn-eth0" Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.581 [INFO][5373] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:09:07.605942 containerd[1920]: 2024-07-02 08:09:07.592 [INFO][5334] k8s.go 621: Teardown processing complete. ContainerID="a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a" Jul 2 08:09:07.609462 containerd[1920]: time="2024-07-02T08:09:07.605793755Z" level=info msg="TearDown network for sandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\" successfully" Jul 2 08:09:07.621383 containerd[1920]: time="2024-07-02T08:09:07.620719439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:09:07.621383 containerd[1920]: time="2024-07-02T08:09:07.621001571Z" level=info msg="RemovePodSandbox \"a8e2da79b4ce01c92c1a0da73f1b413768be618491807711277967e9b402f61a\" returns successfully" Jul 2 08:09:07.624063 containerd[1920]: time="2024-07-02T08:09:07.623192831Z" level=info msg="StopPodSandbox for \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\"" Jul 2 08:09:07.673800 containerd[1920]: time="2024-07-02T08:09:07.673703135Z" level=info msg="StartContainer for \"aa75f686d69e02034d3e53e339d6b4e8c46c2e45b18e0c34ecd991e6a1760203\" returns successfully" Jul 2 08:09:07.683323 containerd[1920]: time="2024-07-02T08:09:07.683033363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.794 [WARNING][5403] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7f019231-67cc-4115-a1b0-8a99961292ba", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a", Pod:"coredns-7db6d8ff4d-8l6zv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dfc40b37a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.795 [INFO][5403] k8s.go 608: Cleaning up netns ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.795 [INFO][5403] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" iface="eth0" netns="" Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.795 [INFO][5403] k8s.go 615: Releasing IP address(es) ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.795 [INFO][5403] utils.go 188: Calico CNI releasing IP address ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.877 [INFO][5413] ipam_plugin.go 411: Releasing address using handleID ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" HandleID="k8s-pod-network.e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.878 [INFO][5413] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.878 [INFO][5413] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.898 [WARNING][5413] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" HandleID="k8s-pod-network.e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.898 [INFO][5413] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" HandleID="k8s-pod-network.e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.901 [INFO][5413] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:09:07.906669 containerd[1920]: 2024-07-02 08:09:07.904 [INFO][5403] k8s.go 621: Teardown processing complete. ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:09:07.909217 containerd[1920]: time="2024-07-02T08:09:07.907422120Z" level=info msg="TearDown network for sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\" successfully" Jul 2 08:09:07.909217 containerd[1920]: time="2024-07-02T08:09:07.907488792Z" level=info msg="StopPodSandbox for \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\" returns successfully" Jul 2 08:09:07.911111 containerd[1920]: time="2024-07-02T08:09:07.910723213Z" level=info msg="RemovePodSandbox for \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\"" Jul 2 08:09:07.911111 containerd[1920]: time="2024-07-02T08:09:07.910783969Z" level=info msg="Forcibly stopping sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\"" Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.024 [WARNING][5431] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7f019231-67cc-4115-a1b0-8a99961292ba", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"57f6837e82b8f956affd8919e11cce30e3ef199e58e11d3b44a93f3244fc348a", Pod:"coredns-7db6d8ff4d-8l6zv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dfc40b37a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.024 [INFO][5431] k8s.go 608: Cleaning up netns ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.024 [INFO][5431] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" iface="eth0" netns="" Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.024 [INFO][5431] k8s.go 615: Releasing IP address(es) ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.025 [INFO][5431] utils.go 188: Calico CNI releasing IP address ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.084 [INFO][5438] ipam_plugin.go 411: Releasing address using handleID ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" HandleID="k8s-pod-network.e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.084 [INFO][5438] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.084 [INFO][5438] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.103 [WARNING][5438] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" HandleID="k8s-pod-network.e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.104 [INFO][5438] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" HandleID="k8s-pod-network.e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--8l6zv-eth0" Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.106 [INFO][5438] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:09:08.113438 containerd[1920]: 2024-07-02 08:09:08.109 [INFO][5431] k8s.go 621: Teardown processing complete. ContainerID="e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176" Jul 2 08:09:08.114445 containerd[1920]: time="2024-07-02T08:09:08.113751490Z" level=info msg="TearDown network for sandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\" successfully" Jul 2 08:09:08.123866 containerd[1920]: time="2024-07-02T08:09:08.123739462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:09:08.124343 containerd[1920]: time="2024-07-02T08:09:08.123942622Z" level=info msg="RemovePodSandbox \"e0422a5139a47e0c294682897fc4c5031275c785b0ea79cde047f53a7d38c176\" returns successfully" Jul 2 08:09:08.125533 containerd[1920]: time="2024-07-02T08:09:08.124959514Z" level=info msg="StopPodSandbox for \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\"" Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.241 [WARNING][5456] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d", Pod:"csi-node-driver-x68xj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib8271107473", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.245 [INFO][5456] k8s.go 608: Cleaning up netns ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.247 [INFO][5456] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" iface="eth0" netns="" Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.247 [INFO][5456] k8s.go 615: Releasing IP address(es) ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.248 [INFO][5456] utils.go 188: Calico CNI releasing IP address ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.312 [INFO][5463] ipam_plugin.go 411: Releasing address using handleID ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" HandleID="k8s-pod-network.9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.313 [INFO][5463] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.313 [INFO][5463] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.330 [WARNING][5463] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" HandleID="k8s-pod-network.9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.330 [INFO][5463] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" HandleID="k8s-pod-network.9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.336 [INFO][5463] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:09:08.344752 containerd[1920]: 2024-07-02 08:09:08.340 [INFO][5456] k8s.go 621: Teardown processing complete. ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:09:08.349363 containerd[1920]: time="2024-07-02T08:09:08.345405407Z" level=info msg="TearDown network for sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\" successfully" Jul 2 08:09:08.349363 containerd[1920]: time="2024-07-02T08:09:08.345598655Z" level=info msg="StopPodSandbox for \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\" returns successfully" Jul 2 08:09:08.349363 containerd[1920]: time="2024-07-02T08:09:08.346332011Z" level=info msg="RemovePodSandbox for \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\"" Jul 2 08:09:08.349363 containerd[1920]: time="2024-07-02T08:09:08.346390739Z" level=info msg="Forcibly stopping sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\"" Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.457 [WARNING][5481] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f6df60c-e3f9-43ed-b240-8e9935f2d2eb", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d", Pod:"csi-node-driver-x68xj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib8271107473", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.458 [INFO][5481] k8s.go 608: Cleaning up netns ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.458 [INFO][5481] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" iface="eth0" netns="" Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.458 [INFO][5481] k8s.go 615: Releasing IP address(es) ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.458 [INFO][5481] utils.go 188: Calico CNI releasing IP address ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.522 [INFO][5487] ipam_plugin.go 411: Releasing address using handleID ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" HandleID="k8s-pod-network.9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.522 [INFO][5487] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.522 [INFO][5487] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.540 [WARNING][5487] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" HandleID="k8s-pod-network.9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.540 [INFO][5487] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" HandleID="k8s-pod-network.9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Workload="ip--172--31--16--163-k8s-csi--node--driver--x68xj-eth0" Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.543 [INFO][5487] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:09:08.551863 containerd[1920]: 2024-07-02 08:09:08.547 [INFO][5481] k8s.go 621: Teardown processing complete. ContainerID="9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d" Jul 2 08:09:08.551863 containerd[1920]: time="2024-07-02T08:09:08.551524656Z" level=info msg="TearDown network for sandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\" successfully" Jul 2 08:09:08.557788 containerd[1920]: time="2024-07-02T08:09:08.557598264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:09:08.557788 containerd[1920]: time="2024-07-02T08:09:08.557708808Z" level=info msg="RemovePodSandbox \"9a54b190a138ef1f827f760ca4f11f5813fe2e2e670c331040ef5444f72a876d\" returns successfully" Jul 2 08:09:08.558868 containerd[1920]: time="2024-07-02T08:09:08.558542244Z" level=info msg="StopPodSandbox for \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\"" Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.656 [WARNING][5506] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3c33e4ef-e5bf-43cc-81b9-869ef5820fac", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118", Pod:"coredns-7db6d8ff4d-hwgh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9865369c958", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.657 [INFO][5506] k8s.go 608: Cleaning up netns ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.657 [INFO][5506] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" iface="eth0" netns="" Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.657 [INFO][5506] k8s.go 615: Releasing IP address(es) ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.657 [INFO][5506] utils.go 188: Calico CNI releasing IP address ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.716 [INFO][5512] ipam_plugin.go 411: Releasing address using handleID ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" HandleID="k8s-pod-network.28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.716 [INFO][5512] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.716 [INFO][5512] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.742 [WARNING][5512] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" HandleID="k8s-pod-network.28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.742 [INFO][5512] ipam_plugin.go 439: Releasing address using workloadID ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" HandleID="k8s-pod-network.28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.746 [INFO][5512] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:09:08.754860 containerd[1920]: 2024-07-02 08:09:08.749 [INFO][5506] k8s.go 621: Teardown processing complete. ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:09:08.754860 containerd[1920]: time="2024-07-02T08:09:08.754653841Z" level=info msg="TearDown network for sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\" successfully" Jul 2 08:09:08.754860 containerd[1920]: time="2024-07-02T08:09:08.754696681Z" level=info msg="StopPodSandbox for \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\" returns successfully" Jul 2 08:09:08.757165 containerd[1920]: time="2024-07-02T08:09:08.756507001Z" level=info msg="RemovePodSandbox for \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\"" Jul 2 08:09:08.757165 containerd[1920]: time="2024-07-02T08:09:08.756573193Z" level=info msg="Forcibly stopping sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\"" Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.862 [WARNING][5530] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3c33e4ef-e5bf-43cc-81b9-869ef5820fac", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"f14a072bfce08913b68577d7b7618fc58894d35638d0c814f083a7bc2d5ba118", Pod:"coredns-7db6d8ff4d-hwgh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9865369c958", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.862 [INFO][5530] k8s.go 608: Cleaning up netns ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.862 [INFO][5530] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" iface="eth0" netns="" Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.862 [INFO][5530] k8s.go 615: Releasing IP address(es) ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.862 [INFO][5530] utils.go 188: Calico CNI releasing IP address ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.954 [INFO][5536] ipam_plugin.go 411: Releasing address using handleID ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" HandleID="k8s-pod-network.28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.955 [INFO][5536] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.955 [INFO][5536] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.981 [WARNING][5536] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" HandleID="k8s-pod-network.28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.982 [INFO][5536] ipam_plugin.go 439: Releasing address using workloadID ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" HandleID="k8s-pod-network.28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Workload="ip--172--31--16--163-k8s-coredns--7db6d8ff4d--hwgh6-eth0" Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.986 [INFO][5536] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:09:08.997648 containerd[1920]: 2024-07-02 08:09:08.993 [INFO][5530] k8s.go 621: Teardown processing complete. ContainerID="28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f" Jul 2 08:09:08.999495 containerd[1920]: time="2024-07-02T08:09:08.997710602Z" level=info msg="TearDown network for sandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\" successfully" Jul 2 08:09:09.012563 containerd[1920]: time="2024-07-02T08:09:09.011523562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:09:09.013578 containerd[1920]: time="2024-07-02T08:09:09.013125838Z" level=info msg="RemovePodSandbox \"28f13c090e1f1115c0c0dfc33f3bfd3263f386b00fd7a2f76255690278af526f\" returns successfully" Jul 2 08:09:09.352400 containerd[1920]: time="2024-07-02T08:09:09.350882352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:09.355061 containerd[1920]: time="2024-07-02T08:09:09.354970764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 08:09:09.356799 containerd[1920]: time="2024-07-02T08:09:09.356726376Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:09.365651 containerd[1920]: time="2024-07-02T08:09:09.365582136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:09.369713 containerd[1920]: time="2024-07-02T08:09:09.369632304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.686526125s" Jul 2 08:09:09.371122 containerd[1920]: time="2024-07-02T08:09:09.371065896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 08:09:09.380118 containerd[1920]: time="2024-07-02T08:09:09.380018904Z" level=info msg="CreateContainer within sandbox 
\"b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 08:09:09.431636 containerd[1920]: time="2024-07-02T08:09:09.430175664Z" level=info msg="CreateContainer within sandbox \"b1efaf4321dfbb2af85fdca7666931bb6851e7eb391bff17b4740f8d87cfb23d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"90435c12afac94645e93f139ab7cc3ad7fa7a1168ae0c71aa6d9d67ef70df1e0\"" Jul 2 08:09:09.434099 containerd[1920]: time="2024-07-02T08:09:09.433322208Z" level=info msg="StartContainer for \"90435c12afac94645e93f139ab7cc3ad7fa7a1168ae0c71aa6d9d67ef70df1e0\"" Jul 2 08:09:09.533586 systemd[1]: Started cri-containerd-90435c12afac94645e93f139ab7cc3ad7fa7a1168ae0c71aa6d9d67ef70df1e0.scope - libcontainer container 90435c12afac94645e93f139ab7cc3ad7fa7a1168ae0c71aa6d9d67ef70df1e0. Jul 2 08:09:09.667306 containerd[1920]: time="2024-07-02T08:09:09.667158793Z" level=info msg="StartContainer for \"90435c12afac94645e93f139ab7cc3ad7fa7a1168ae0c71aa6d9d67ef70df1e0\" returns successfully" Jul 2 08:09:09.908132 kubelet[3344]: I0702 08:09:09.908093 3344 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 08:09:09.908934 kubelet[3344]: I0702 08:09:09.908912 3344 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 08:09:10.291967 systemd[1]: Started sshd@13-172.31.16.163:22-139.178.89.65:52296.service - OpenSSH per-connection server daemon (139.178.89.65:52296). Jul 2 08:09:10.302517 kubelet[3344]: I0702 08:09:10.302343 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-x68xj" podStartSLOduration=30.321847539 podStartE2EDuration="39.302288652s" podCreationTimestamp="2024-07-02 08:08:31 +0000 UTC" firstStartedPulling="2024-07-02 08:09:00.393165555 +0000 UTC m=+53.921026880" lastFinishedPulling="2024-07-02 08:09:09.373606668 +0000 UTC m=+62.901467993" observedRunningTime="2024-07-02 08:09:10.296472612 +0000 UTC m=+63.824333937" watchObservedRunningTime="2024-07-02 08:09:10.302288652 +0000 UTC m=+63.830150013" Jul 2 08:09:10.478274 sshd[5585]: Accepted publickey for core from 139.178.89.65 port 52296 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:10.481706 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:10.491877 systemd-logind[1911]: New session 14 of user core. Jul 2 08:09:10.497628 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 08:09:10.768594 sshd[5585]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:10.777980 systemd[1]: sshd@13-172.31.16.163:22-139.178.89.65:52296.service: Deactivated successfully. Jul 2 08:09:10.786271 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 08:09:10.787889 systemd-logind[1911]: Session 14 logged out. Waiting for processes to exit. Jul 2 08:09:10.789731 systemd-logind[1911]: Removed session 14. Jul 2 08:09:15.815814 systemd[1]: Started sshd@14-172.31.16.163:22-139.178.89.65:52306.service - OpenSSH per-connection server daemon (139.178.89.65:52306). 
Jul 2 08:09:16.001016 sshd[5628]: Accepted publickey for core from 139.178.89.65 port 52306 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:16.003853 sshd[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:16.012389 systemd-logind[1911]: New session 15 of user core. Jul 2 08:09:16.021699 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 08:09:16.273421 sshd[5628]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:16.280823 systemd[1]: sshd@14-172.31.16.163:22-139.178.89.65:52306.service: Deactivated successfully. Jul 2 08:09:16.287489 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 08:09:16.289870 systemd-logind[1911]: Session 15 logged out. Waiting for processes to exit. Jul 2 08:09:16.292734 systemd-logind[1911]: Removed session 15. Jul 2 08:09:21.318806 systemd[1]: Started sshd@15-172.31.16.163:22-139.178.89.65:58824.service - OpenSSH per-connection server daemon (139.178.89.65:58824). Jul 2 08:09:21.508484 sshd[5664]: Accepted publickey for core from 139.178.89.65 port 58824 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:21.511713 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:21.522573 systemd-logind[1911]: New session 16 of user core. Jul 2 08:09:21.527542 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 08:09:21.796201 sshd[5664]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:21.802445 systemd-logind[1911]: Session 16 logged out. Waiting for processes to exit. Jul 2 08:09:21.803214 systemd[1]: sshd@15-172.31.16.163:22-139.178.89.65:58824.service: Deactivated successfully. Jul 2 08:09:21.809315 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 08:09:21.814043 systemd-logind[1911]: Removed session 16. Jul 2 08:09:26.840785 systemd[1]: Started sshd@16-172.31.16.163:22-139.178.89.65:58830.service - OpenSSH per-connection server daemon (139.178.89.65:58830). Jul 2 08:09:27.032659 sshd[5703]: Accepted publickey for core from 139.178.89.65 port 58830 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:27.037406 sshd[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:27.048180 systemd-logind[1911]: New session 17 of user core. Jul 2 08:09:27.056168 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 08:09:27.377600 sshd[5703]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:27.387207 systemd[1]: sshd@16-172.31.16.163:22-139.178.89.65:58830.service: Deactivated successfully. Jul 2 08:09:27.397000 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 08:09:27.403715 systemd-logind[1911]: Session 17 logged out. Waiting for processes to exit. Jul 2 08:09:27.442785 systemd[1]: Started sshd@17-172.31.16.163:22-139.178.89.65:58840.service - OpenSSH per-connection server daemon (139.178.89.65:58840). Jul 2 08:09:27.446504 systemd-logind[1911]: Removed session 17. Jul 2 08:09:27.643419 sshd[5716]: Accepted publickey for core from 139.178.89.65 port 58840 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:27.646094 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:27.660677 systemd-logind[1911]: New session 18 of user core. Jul 2 08:09:27.669382 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 2 08:09:28.213488 sshd[5716]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:28.228483 systemd[1]: sshd@17-172.31.16.163:22-139.178.89.65:58840.service: Deactivated successfully. Jul 2 08:09:28.241143 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 08:09:28.244921 systemd-logind[1911]: Session 18 logged out. Waiting for processes to exit. Jul 2 08:09:28.278757 systemd[1]: Started sshd@18-172.31.16.163:22-139.178.89.65:42562.service - OpenSSH per-connection server daemon (139.178.89.65:42562). Jul 2 08:09:28.286936 systemd-logind[1911]: Removed session 18. Jul 2 08:09:28.487615 sshd[5727]: Accepted publickey for core from 139.178.89.65 port 42562 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:28.493116 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:28.508208 systemd-logind[1911]: New session 19 of user core. Jul 2 08:09:28.516264 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 08:09:29.281607 kubelet[3344]: I0702 08:09:29.281496 3344 topology_manager.go:215] "Topology Admit Handler" podUID="33c99542-f9fe-4ada-b5e2-2d578a4ec1b5" podNamespace="calico-apiserver" podName="calico-apiserver-594cd947df-nspp2" Jul 2 08:09:29.298287 systemd[1]: Created slice kubepods-besteffort-pod33c99542_f9fe_4ada_b5e2_2d578a4ec1b5.slice - libcontainer container kubepods-besteffort-pod33c99542_f9fe_4ada_b5e2_2d578a4ec1b5.slice. Jul 2 08:09:29.381699 kubelet[3344]: I0702 08:09:29.381645 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/33c99542-f9fe-4ada-b5e2-2d578a4ec1b5-calico-apiserver-certs\") pod \"calico-apiserver-594cd947df-nspp2\" (UID: \"33c99542-f9fe-4ada-b5e2-2d578a4ec1b5\") " pod="calico-apiserver/calico-apiserver-594cd947df-nspp2" Jul 2 08:09:29.382417 kubelet[3344]: I0702 08:09:29.382364 3344 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9whg\" (UniqueName: \"kubernetes.io/projected/33c99542-f9fe-4ada-b5e2-2d578a4ec1b5-kube-api-access-m9whg\") pod \"calico-apiserver-594cd947df-nspp2\" (UID: \"33c99542-f9fe-4ada-b5e2-2d578a4ec1b5\") " pod="calico-apiserver/calico-apiserver-594cd947df-nspp2" Jul 2 08:09:29.483863 kubelet[3344]: E0702 08:09:29.483794 3344 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 08:09:29.484030 kubelet[3344]: E0702 08:09:29.483904 3344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33c99542-f9fe-4ada-b5e2-2d578a4ec1b5-calico-apiserver-certs podName:33c99542-f9fe-4ada-b5e2-2d578a4ec1b5 nodeName:}" failed. No retries permitted until 2024-07-02 08:09:29.983875648 +0000 UTC m=+83.511736985 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/33c99542-f9fe-4ada-b5e2-2d578a4ec1b5-calico-apiserver-certs") pod "calico-apiserver-594cd947df-nspp2" (UID: "33c99542-f9fe-4ada-b5e2-2d578a4ec1b5") : secret "calico-apiserver-certs" not found Jul 2 08:09:29.996270 kubelet[3344]: E0702 08:09:29.996180 3344 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 08:09:29.996441 kubelet[3344]: E0702 08:09:29.996337 3344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33c99542-f9fe-4ada-b5e2-2d578a4ec1b5-calico-apiserver-certs podName:33c99542-f9fe-4ada-b5e2-2d578a4ec1b5 nodeName:}" failed. No retries permitted until 2024-07-02 08:09:30.996311326 +0000 UTC m=+84.524172651 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/33c99542-f9fe-4ada-b5e2-2d578a4ec1b5-calico-apiserver-certs") pod "calico-apiserver-594cd947df-nspp2" (UID: "33c99542-f9fe-4ada-b5e2-2d578a4ec1b5") : secret "calico-apiserver-certs" not found Jul 2 08:09:31.106370 containerd[1920]: time="2024-07-02T08:09:31.106300832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-594cd947df-nspp2,Uid:33c99542-f9fe-4ada-b5e2-2d578a4ec1b5,Namespace:calico-apiserver,Attempt:0,}" Jul 2 08:09:31.605131 systemd-networkd[1842]: caliab429feb889: Link UP Jul 2 08:09:31.605558 systemd-networkd[1842]: caliab429feb889: Gained carrier Jul 2 08:09:31.619664 (udev-worker)[5767]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.302 [INFO][5749] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0 calico-apiserver-594cd947df- calico-apiserver 33c99542-f9fe-4ada-b5e2-2d578a4ec1b5 1047 0 2024-07-02 08:09:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:594cd947df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-163 calico-apiserver-594cd947df-nspp2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliab429feb889 [] []}} ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Namespace="calico-apiserver" Pod="calico-apiserver-594cd947df-nspp2" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.303 [INFO][5749] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Namespace="calico-apiserver" Pod="calico-apiserver-594cd947df-nspp2" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.423 [INFO][5760] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" HandleID="k8s-pod-network.efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Workload="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.479 [INFO][5760] ipam_plugin.go 264: Auto assigning IP 
ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" HandleID="k8s-pod-network.efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Workload="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cd70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-163", "pod":"calico-apiserver-594cd947df-nspp2", "timestamp":"2024-07-02 08:09:31.423303765 +0000 UTC"}, Hostname:"ip-172-31-16-163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.479 [INFO][5760] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.479 [INFO][5760] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.479 [INFO][5760] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-163' Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.483 [INFO][5760] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" host="ip-172-31-16-163" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.500 [INFO][5760] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-163" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.524 [INFO][5760] ipam.go 489: Trying affinity for 192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.529 [INFO][5760] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.540 [INFO][5760] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ip-172-31-16-163" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.540 [INFO][5760] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" host="ip-172-31-16-163" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.549 [INFO][5760] ipam.go 1685: Creating new handle: k8s-pod-network.efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.562 [INFO][5760] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" host="ip-172-31-16-163" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.589 [INFO][5760] ipam.go 1216: Successfully claimed IPs: [192.168.5.69/26] block=192.168.5.64/26 handle="k8s-pod-network.efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" host="ip-172-31-16-163" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.589 [INFO][5760] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.69/26] handle="k8s-pod-network.efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" host="ip-172-31-16-163" Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.589 [INFO][5760] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:09:31.658646 containerd[1920]: 2024-07-02 08:09:31.589 [INFO][5760] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.5.69/26] IPv6=[] ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" HandleID="k8s-pod-network.efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Workload="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0" Jul 2 08:09:31.663061 containerd[1920]: 2024-07-02 08:09:31.598 [INFO][5749] k8s.go 386: Populated endpoint ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Namespace="calico-apiserver" Pod="calico-apiserver-594cd947df-nspp2" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0", GenerateName:"calico-apiserver-594cd947df-", Namespace:"calico-apiserver", SelfLink:"", UID:"33c99542-f9fe-4ada-b5e2-2d578a4ec1b5", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"594cd947df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"", Pod:"calico-apiserver-594cd947df-nspp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab429feb889", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:31.663061 containerd[1920]: 2024-07-02 08:09:31.599 [INFO][5749] k8s.go 387: Calico CNI using IPs: [192.168.5.69/32] ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Namespace="calico-apiserver" Pod="calico-apiserver-594cd947df-nspp2" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0" Jul 2 08:09:31.663061 containerd[1920]: 2024-07-02 08:09:31.599 [INFO][5749] dataplane_linux.go 68: Setting the host side veth name to caliab429feb889 ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Namespace="calico-apiserver" Pod="calico-apiserver-594cd947df-nspp2" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0" Jul 2 08:09:31.663061 containerd[1920]: 2024-07-02 08:09:31.609 [INFO][5749] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Namespace="calico-apiserver" Pod="calico-apiserver-594cd947df-nspp2" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0" Jul 2 08:09:31.663061 containerd[1920]: 2024-07-02 08:09:31.614 [INFO][5749] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Namespace="calico-apiserver" Pod="calico-apiserver-594cd947df-nspp2" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0", GenerateName:"calico-apiserver-594cd947df-", Namespace:"calico-apiserver", SelfLink:"", UID:"33c99542-f9fe-4ada-b5e2-2d578a4ec1b5", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"594cd947df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-163", ContainerID:"efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c", Pod:"calico-apiserver-594cd947df-nspp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab429feb889", MAC:"5e:73:d8:7d:bb:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:09:31.663061 containerd[1920]: 2024-07-02 08:09:31.642 [INFO][5749] k8s.go 500: Wrote updated endpoint to datastore ContainerID="efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c" Namespace="calico-apiserver" Pod="calico-apiserver-594cd947df-nspp2" WorkloadEndpoint="ip--172--31--16--163-k8s-calico--apiserver--594cd947df--nspp2-eth0" Jul 2 08:09:31.762839 containerd[1920]: time="2024-07-02T08:09:31.761644223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:09:31.762839 containerd[1920]: time="2024-07-02T08:09:31.761811047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:09:31.762839 containerd[1920]: time="2024-07-02T08:09:31.761876543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:09:31.762839 containerd[1920]: time="2024-07-02T08:09:31.761918027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:09:31.877703 systemd[1]: Started cri-containerd-efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c.scope - libcontainer container efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c. 
Jul 2 08:09:32.177455 containerd[1920]: time="2024-07-02T08:09:32.177250809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-594cd947df-nspp2,Uid:33c99542-f9fe-4ada-b5e2-2d578a4ec1b5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c\"" Jul 2 08:09:32.189839 containerd[1920]: time="2024-07-02T08:09:32.189206697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 08:09:32.672432 systemd-networkd[1842]: caliab429feb889: Gained IPv6LL Jul 2 08:09:32.731727 sshd[5727]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:32.743140 systemd-logind[1911]: Session 19 logged out. Waiting for processes to exit. Jul 2 08:09:32.745010 systemd[1]: sshd@18-172.31.16.163:22-139.178.89.65:42562.service: Deactivated successfully. Jul 2 08:09:32.753041 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 08:09:32.753865 systemd[1]: session-19.scope: Consumed 1.137s CPU time. Jul 2 08:09:32.775416 systemd-logind[1911]: Removed session 19. Jul 2 08:09:32.786114 systemd[1]: Started sshd@19-172.31.16.163:22-139.178.89.65:42566.service - OpenSSH per-connection server daemon (139.178.89.65:42566). Jul 2 08:09:32.991752 sshd[5829]: Accepted publickey for core from 139.178.89.65 port 42566 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:32.995165 sshd[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:33.006097 systemd-logind[1911]: New session 20 of user core. Jul 2 08:09:33.011928 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 08:09:33.823673 sshd[5829]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:33.835654 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 08:09:33.840134 systemd[1]: sshd@19-172.31.16.163:22-139.178.89.65:42566.service: Deactivated successfully. Jul 2 08:09:33.857412 systemd-logind[1911]: Session 20 logged out. Waiting for processes to exit. Jul 2 08:09:33.891282 systemd[1]: Started sshd@20-172.31.16.163:22-139.178.89.65:42574.service - OpenSSH per-connection server daemon (139.178.89.65:42574). Jul 2 08:09:33.896813 systemd-logind[1911]: Removed session 20. Jul 2 08:09:34.146204 sshd[5847]: Accepted publickey for core from 139.178.89.65 port 42574 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:34.150359 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:34.171818 systemd-logind[1911]: New session 21 of user core. Jul 2 08:09:34.178932 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 08:09:34.627785 sshd[5847]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:34.641980 systemd[1]: sshd@20-172.31.16.163:22-139.178.89.65:42574.service: Deactivated successfully. Jul 2 08:09:34.654709 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 08:09:34.663431 systemd-logind[1911]: Session 21 logged out. Waiting for processes to exit. Jul 2 08:09:34.669739 systemd-logind[1911]: Removed session 21. 
Jul 2 08:09:34.813316 ntpd[1902]: Listen normally on 14 caliab429feb889 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 2 08:09:34.815578 ntpd[1902]: 2 Jul 08:09:34 ntpd[1902]: Listen normally on 14 caliab429feb889 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 2 08:09:35.629176 containerd[1920]: time="2024-07-02T08:09:35.629089790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:35.631883 containerd[1920]: time="2024-07-02T08:09:35.631749806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jul 2 08:09:35.633324 containerd[1920]: time="2024-07-02T08:09:35.633159962Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:35.641201 containerd[1920]: time="2024-07-02T08:09:35.641091218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:09:35.647683 containerd[1920]: time="2024-07-02T08:09:35.647400698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 3.458071613s" Jul 2 08:09:35.647683 containerd[1920]: time="2024-07-02T08:09:35.647475446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 08:09:35.656031 containerd[1920]: time="2024-07-02T08:09:35.655201178Z" level=info msg="CreateContainer within sandbox \"efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 08:09:35.686987 containerd[1920]: time="2024-07-02T08:09:35.686898878Z" level=info msg="CreateContainer within sandbox \"efede27579bc46dda70604404d38522654fd71e57f98e4ad20cf85d6bc909f1c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf6667513c8d9b8220910b6c969d6c9e72c1f3e891ec3ac5b2dbf73c5d876f1c\"" Jul 2 08:09:35.690161 containerd[1920]: time="2024-07-02T08:09:35.689898914Z" level=info msg="StartContainer for \"bf6667513c8d9b8220910b6c969d6c9e72c1f3e891ec3ac5b2dbf73c5d876f1c\"" Jul 2 08:09:35.821592 systemd[1]: Started cri-containerd-bf6667513c8d9b8220910b6c969d6c9e72c1f3e891ec3ac5b2dbf73c5d876f1c.scope - libcontainer container bf6667513c8d9b8220910b6c969d6c9e72c1f3e891ec3ac5b2dbf73c5d876f1c. 
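The pull entries above contain enough to estimate effective registry throughput for the apiserver image: containerd reports 37831527 bytes read over 3.458071613s (the "Pulled" message lists a slightly larger on-disk size of 39198111 bytes). A quick back-of-the-envelope sketch using those logged figures:

package main

import "fmt"

func main() {
	const bytesRead = 37831527.0 // from "stop pulling image ... bytes read=37831527"
	const seconds = 3.458071613  // from "... in 3.458071613s"
	rate := bytesRead / seconds
	fmt.Printf("~%.1f MB/s (~%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
	// Roughly 10.9 MB/s, i.e. about 10.4 MiB/s.
}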
Jul 2 08:09:35.973418 containerd[1920]: time="2024-07-02T08:09:35.973332460Z" level=info msg="StartContainer for \"bf6667513c8d9b8220910b6c969d6c9e72c1f3e891ec3ac5b2dbf73c5d876f1c\" returns successfully" Jul 2 08:09:38.125044 kubelet[3344]: I0702 08:09:38.124924 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-594cd947df-nspp2" podStartSLOduration=5.662070482 podStartE2EDuration="9.124900011s" podCreationTimestamp="2024-07-02 08:09:29 +0000 UTC" firstStartedPulling="2024-07-02 08:09:32.187206321 +0000 UTC m=+85.715067658" lastFinishedPulling="2024-07-02 08:09:35.65003585 +0000 UTC m=+89.177897187" observedRunningTime="2024-07-02 08:09:36.41294597 +0000 UTC m=+89.940807307" watchObservedRunningTime="2024-07-02 08:09:38.124900011 +0000 UTC m=+91.652761348" Jul 2 08:09:39.669771 systemd[1]: Started sshd@21-172.31.16.163:22-139.178.89.65:59710.service - OpenSSH per-connection server daemon (139.178.89.65:59710). Jul 2 08:09:39.875020 sshd[5919]: Accepted publickey for core from 139.178.89.65 port 59710 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:39.878146 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:39.891068 systemd-logind[1911]: New session 22 of user core. Jul 2 08:09:39.899601 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 08:09:40.202587 sshd[5919]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:40.213528 systemd[1]: sshd@21-172.31.16.163:22-139.178.89.65:59710.service: Deactivated successfully. Jul 2 08:09:40.220375 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 08:09:40.222554 systemd-logind[1911]: Session 22 logged out. Waiting for processes to exit. Jul 2 08:09:40.228611 systemd-logind[1911]: Removed session 22. Jul 2 08:09:45.239789 systemd[1]: Started sshd@22-172.31.16.163:22-139.178.89.65:59712.service - OpenSSH per-connection server daemon (139.178.89.65:59712). Jul 2 08:09:45.424159 sshd[5958]: Accepted publickey for core from 139.178.89.65 port 59712 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:45.426919 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:45.435316 systemd-logind[1911]: New session 23 of user core. Jul 2 08:09:45.445499 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 08:09:45.684405 sshd[5958]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:45.691013 systemd[1]: sshd@22-172.31.16.163:22-139.178.89.65:59712.service: Deactivated successfully. Jul 2 08:09:45.695069 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 08:09:45.697375 systemd-logind[1911]: Session 23 logged out. Waiting for processes to exit. Jul 2 08:09:45.700992 systemd-logind[1911]: Removed session 23. Jul 2 08:09:50.730810 systemd[1]: Started sshd@23-172.31.16.163:22-139.178.89.65:60940.service - OpenSSH per-connection server daemon (139.178.89.65:60940). Jul 2 08:09:50.912989 sshd[5997]: Accepted publickey for core from 139.178.89.65 port 60940 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:50.915653 sshd[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:50.926732 systemd-logind[1911]: New session 24 of user core. Jul 2 08:09:50.930847 systemd[1]: Started session-24.scope - Session 24 of User core. 
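The pod_startup_latency_tracker entry above for calico-apiserver-594cd947df-nspp2 is internally consistent: the end-to-end duration is observedRunningTime minus podCreationTimestamp, and the SLO duration is that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A small sketch re-deriving both numbers from the timestamps as logged:

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2024-07-02 08:09:29 +0000 UTC")
	firstPull := parse("2024-07-02 08:09:32.187206321 +0000 UTC")
	lastPull := parse("2024-07-02 08:09:35.65003585 +0000 UTC")
	running := parse("2024-07-02 08:09:38.124900011 +0000 UTC")

	e2e := running.Sub(created)           // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull)  // podStartSLOduration (excludes image pull time)
	fmt.Println(e2e, slo)                 // 9.124900011s 5.662070482s, matching the log
}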
Jul 2 08:09:51.182059 sshd[5997]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:51.187545 systemd[1]: sshd@23-172.31.16.163:22-139.178.89.65:60940.service: Deactivated successfully. Jul 2 08:09:51.192272 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 08:09:51.196998 systemd-logind[1911]: Session 24 logged out. Waiting for processes to exit. Jul 2 08:09:51.199297 systemd-logind[1911]: Removed session 24. Jul 2 08:09:56.225378 systemd[1]: Started sshd@24-172.31.16.163:22-139.178.89.65:60946.service - OpenSSH per-connection server daemon (139.178.89.65:60946). Jul 2 08:09:56.399453 sshd[6014]: Accepted publickey for core from 139.178.89.65 port 60946 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:09:56.402494 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:09:56.412577 systemd-logind[1911]: New session 25 of user core. Jul 2 08:09:56.423537 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 08:09:56.669251 sshd[6014]: pam_unix(sshd:session): session closed for user core Jul 2 08:09:56.676149 systemd-logind[1911]: Session 25 logged out. Waiting for processes to exit. Jul 2 08:09:56.677847 systemd[1]: sshd@24-172.31.16.163:22-139.178.89.65:60946.service: Deactivated successfully. Jul 2 08:09:56.683888 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 08:09:56.688186 systemd-logind[1911]: Removed session 25. Jul 2 08:10:01.712782 systemd[1]: Started sshd@25-172.31.16.163:22-139.178.89.65:41210.service - OpenSSH per-connection server daemon (139.178.89.65:41210). Jul 2 08:10:01.894523 sshd[6032]: Accepted publickey for core from 139.178.89.65 port 41210 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:10:01.897581 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:10:01.906721 systemd-logind[1911]: New session 26 of user core. Jul 2 08:10:01.916535 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 08:10:02.164123 sshd[6032]: pam_unix(sshd:session): session closed for user core Jul 2 08:10:02.171840 systemd[1]: sshd@25-172.31.16.163:22-139.178.89.65:41210.service: Deactivated successfully. Jul 2 08:10:02.177079 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 08:10:02.179595 systemd-logind[1911]: Session 26 logged out. Waiting for processes to exit. Jul 2 08:10:02.182324 systemd-logind[1911]: Removed session 26. Jul 2 08:10:07.205787 systemd[1]: Started sshd@26-172.31.16.163:22-139.178.89.65:41222.service - OpenSSH per-connection server daemon (139.178.89.65:41222). Jul 2 08:10:07.387914 sshd[6047]: Accepted publickey for core from 139.178.89.65 port 41222 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:10:07.391069 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:10:07.400638 systemd-logind[1911]: New session 27 of user core. Jul 2 08:10:07.409647 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 08:10:07.659752 sshd[6047]: pam_unix(sshd:session): session closed for user core Jul 2 08:10:07.666415 systemd[1]: sshd@26-172.31.16.163:22-139.178.89.65:41222.service: Deactivated successfully. Jul 2 08:10:07.671408 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 08:10:07.673076 systemd-logind[1911]: Session 27 logged out. Waiting for processes to exit. Jul 2 08:10:07.675330 systemd-logind[1911]: Removed session 27. 
Jul 2 08:10:12.702834 systemd[1]: Started sshd@27-172.31.16.163:22-139.178.89.65:39384.service - OpenSSH per-connection server daemon (139.178.89.65:39384). Jul 2 08:10:12.894495 sshd[6066]: Accepted publickey for core from 139.178.89.65 port 39384 ssh2: RSA SHA256:zev8WD4CKaPapZVhVIFgLFFY23WI3PrYJfjwYFJuZUY Jul 2 08:10:12.897912 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:10:12.906553 systemd-logind[1911]: New session 28 of user core. Jul 2 08:10:12.912578 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 08:10:13.163394 sshd[6066]: pam_unix(sshd:session): session closed for user core Jul 2 08:10:13.171746 systemd[1]: sshd@27-172.31.16.163:22-139.178.89.65:39384.service: Deactivated successfully. Jul 2 08:10:13.177480 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 08:10:13.179187 systemd-logind[1911]: Session 28 logged out. Waiting for processes to exit. Jul 2 08:10:13.182179 systemd-logind[1911]: Removed session 28. Jul 2 08:10:23.012715 systemd[1]: run-containerd-runc-k8s.io-01ddf56f741fcd9af715a972a80052df60919c46239099f9d28bc19bce6728d7-runc.zvEgsl.mount: Deactivated successfully. Jul 2 08:10:27.872721 systemd[1]: cri-containerd-e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e.scope: Deactivated successfully. Jul 2 08:10:27.873869 systemd[1]: cri-containerd-e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e.scope: Consumed 6.649s CPU time, 24.2M memory peak, 0B memory swap peak. Jul 2 08:10:27.925359 containerd[1920]: time="2024-07-02T08:10:27.924831882Z" level=info msg="shim disconnected" id=e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e namespace=k8s.io Jul 2 08:10:27.925959 containerd[1920]: time="2024-07-02T08:10:27.925357446Z" level=warning msg="cleaning up after shim disconnected" id=e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e namespace=k8s.io Jul 2 08:10:27.925959 containerd[1920]: time="2024-07-02T08:10:27.925409670Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:27.927118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e-rootfs.mount: Deactivated successfully. Jul 2 08:10:28.167787 systemd[1]: cri-containerd-9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018.scope: Deactivated successfully. Jul 2 08:10:28.168637 systemd[1]: cri-containerd-9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018.scope: Consumed 11.852s CPU time. Jul 2 08:10:28.206867 containerd[1920]: time="2024-07-02T08:10:28.206726319Z" level=info msg="shim disconnected" id=9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018 namespace=k8s.io Jul 2 08:10:28.207247 containerd[1920]: time="2024-07-02T08:10:28.206830059Z" level=warning msg="cleaning up after shim disconnected" id=9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018 namespace=k8s.io Jul 2 08:10:28.207247 containerd[1920]: time="2024-07-02T08:10:28.207092055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:10:28.211885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018-rootfs.mount: Deactivated successfully. 
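When the cri-containerd scopes above stop, systemd reports per-scope totals ("Consumed ... CPU time, ... memory peak"). Comparable numbers can be read directly from cgroup v2 while a scope is still running; the sketch below assumes a unified cgroup hierarchy, and the scope path is hypothetical rather than taken from the log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Hypothetical scope path; substitute an existing cri-containerd-<id>.scope on the host.
	scope := "/sys/fs/cgroup/system.slice/cri-containerd-EXAMPLE.scope"

	// cpu.stat carries cumulative CPU usage in microseconds.
	f, err := os.Open(filepath.Join(scope, "cpu.stat"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "usage_usec") {
			fmt.Println("cpu", sc.Text())
		}
	}

	// memory.current is the instantaneous usage; the peak reported by systemd
	// comes from its own accounting.
	if b, err := os.ReadFile(filepath.Join(scope, "memory.current")); err == nil {
		fmt.Println("memory.current", strings.TrimSpace(string(b)), "bytes")
	}
}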
Jul 2 08:10:28.545345 kubelet[3344]: I0702 08:10:28.544386 3344 scope.go:117] "RemoveContainer" containerID="e4cf735d1dc5ad7555dc0dd9774d508c9fa3e54c084fe66447e6f5072992830e"
Jul 2 08:10:28.546645 kubelet[3344]: I0702 08:10:28.546171 3344 scope.go:117] "RemoveContainer" containerID="9a798a73736f6a022717d7a835de08a965ba01bca2bac9a21e09b6c55ee84018"
Jul 2 08:10:28.551019 containerd[1920]: time="2024-07-02T08:10:28.550846241Z" level=info msg="CreateContainer within sandbox \"4b895066a66a252d029258b28125ac41107b181fa1e0d5123c883c0da08104ef\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 2 08:10:28.552754 containerd[1920]: time="2024-07-02T08:10:28.552339773Z" level=info msg="CreateContainer within sandbox \"2ce88d67bb8e06e29931c15b8404a860bfdbb55629025bc05ee572309e74a9c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 08:10:28.585524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004632336.mount: Deactivated successfully.
Jul 2 08:10:28.588825 containerd[1920]: time="2024-07-02T08:10:28.588726545Z" level=info msg="CreateContainer within sandbox \"2ce88d67bb8e06e29931c15b8404a860bfdbb55629025bc05ee572309e74a9c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"74ace047a1026bfdf197210ea936b1b65a107695f1a8b8d02fc5a20d09129348\""
Jul 2 08:10:28.590933 containerd[1920]: time="2024-07-02T08:10:28.590600657Z" level=info msg="StartContainer for \"74ace047a1026bfdf197210ea936b1b65a107695f1a8b8d02fc5a20d09129348\""
Jul 2 08:10:28.596999 containerd[1920]: time="2024-07-02T08:10:28.596848349Z" level=info msg="CreateContainer within sandbox \"4b895066a66a252d029258b28125ac41107b181fa1e0d5123c883c0da08104ef\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f2598b610d70e57b295982d3ab50991bc4d12c351a275cebb84a8796d20971b7\""
Jul 2 08:10:28.599304 containerd[1920]: time="2024-07-02T08:10:28.597822725Z" level=info msg="StartContainer for \"f2598b610d70e57b295982d3ab50991bc4d12c351a275cebb84a8796d20971b7\""
Jul 2 08:10:28.669623 systemd[1]: Started cri-containerd-74ace047a1026bfdf197210ea936b1b65a107695f1a8b8d02fc5a20d09129348.scope - libcontainer container 74ace047a1026bfdf197210ea936b1b65a107695f1a8b8d02fc5a20d09129348.
Jul 2 08:10:28.672817 systemd[1]: Started cri-containerd-f2598b610d70e57b295982d3ab50991bc4d12c351a275cebb84a8796d20971b7.scope - libcontainer container f2598b610d70e57b295982d3ab50991bc4d12c351a275cebb84a8796d20971b7.
Jul 2 08:10:28.748007 containerd[1920]: time="2024-07-02T08:10:28.747940278Z" level=info msg="StartContainer for \"f2598b610d70e57b295982d3ab50991bc4d12c351a275cebb84a8796d20971b7\" returns successfully"
Jul 2 08:10:28.781878 containerd[1920]: time="2024-07-02T08:10:28.781783038Z" level=info msg="StartContainer for \"74ace047a1026bfdf197210ea936b1b65a107695f1a8b8d02fc5a20d09129348\" returns successfully"
Jul 2 08:10:29.585953 kubelet[3344]: E0702 08:10:29.585873 3344 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-163?timeout=10s\": context deadline exceeded"
Jul 2 08:10:31.224572 systemd[1]: cri-containerd-ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0.scope: Deactivated successfully.
Jul 2 08:10:31.225335 systemd[1]: cri-containerd-ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0.scope: Consumed 3.144s CPU time, 16.0M memory peak, 0B memory swap peak.
Jul 2 08:10:31.268070 containerd[1920]: time="2024-07-02T08:10:31.267775111Z" level=info msg="shim disconnected" id=ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0 namespace=k8s.io
Jul 2 08:10:31.268070 containerd[1920]: time="2024-07-02T08:10:31.267944719Z" level=warning msg="cleaning up after shim disconnected" id=ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0 namespace=k8s.io
Jul 2 08:10:31.268070 containerd[1920]: time="2024-07-02T08:10:31.268028143Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 08:10:31.272368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0-rootfs.mount: Deactivated successfully.
Jul 2 08:10:31.564931 kubelet[3344]: I0702 08:10:31.564727 3344 scope.go:117] "RemoveContainer" containerID="ce24221db5c3b9072872def05a6ba8da15bca8df57fc29093862925d139b3fc0"
Jul 2 08:10:31.569423 containerd[1920]: time="2024-07-02T08:10:31.569318324Z" level=info msg="CreateContainer within sandbox \"b6893964c12115620fbdd71be60ad9351835035dc792e9403d816df1c41d171e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 08:10:31.602065 containerd[1920]: time="2024-07-02T08:10:31.601031816Z" level=info msg="CreateContainer within sandbox \"b6893964c12115620fbdd71be60ad9351835035dc792e9403d816df1c41d171e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"660db3cf01aef1791cf760ec49fe53a380fef0cc89a8d809728b745bf4cae5f4\""
Jul 2 08:10:31.602387 containerd[1920]: time="2024-07-02T08:10:31.602317916Z" level=info msg="StartContainer for \"660db3cf01aef1791cf760ec49fe53a380fef0cc89a8d809728b745bf4cae5f4\""
Jul 2 08:10:31.604946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066139447.mount: Deactivated successfully.
Jul 2 08:10:31.668543 systemd[1]: Started cri-containerd-660db3cf01aef1791cf760ec49fe53a380fef0cc89a8d809728b745bf4cae5f4.scope - libcontainer container 660db3cf01aef1791cf760ec49fe53a380fef0cc89a8d809728b745bf4cae5f4.
Jul 2 08:10:31.740434 containerd[1920]: time="2024-07-02T08:10:31.739800933Z" level=info msg="StartContainer for \"660db3cf01aef1791cf760ec49fe53a380fef0cc89a8d809728b745bf4cae5f4\" returns successfully"
Jul 2 08:10:39.588246 kubelet[3344]: E0702 08:10:39.587491 3344 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 08:10:43.450291 systemd[1]: run-containerd-runc-k8s.io-01ddf56f741fcd9af715a972a80052df60919c46239099f9d28bc19bce6728d7-runc.824vjQ.mount: Deactivated successfully.