Jul 2 08:58:24.195216 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 2 08:58:24.195262 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024 Jul 2 08:58:24.195287 kernel: KASLR disabled due to lack of seed Jul 2 08:58:24.195304 kernel: efi: EFI v2.7 by EDK II Jul 2 08:58:24.195319 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18 Jul 2 08:58:24.195335 kernel: ACPI: Early table checksum verification disabled Jul 2 08:58:24.195352 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 2 08:58:24.195368 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 2 08:58:24.195384 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 2 08:58:24.195399 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 2 08:58:24.195420 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 2 08:58:24.195436 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 2 08:58:24.195491 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 2 08:58:24.195514 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 2 08:58:24.195533 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 2 08:58:24.195557 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 2 08:58:24.195575 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 2 08:58:24.195591 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 2 08:58:24.195607 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 2 08:58:24.195624 kernel: printk: bootconsole [uart0] enabled Jul 2 08:58:24.195640 kernel: NUMA: Failed to initialise from firmware Jul 2 08:58:24.195657 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 2 08:58:24.195673 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jul 2 08:58:24.195690 kernel: Zone ranges: Jul 2 08:58:24.195706 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 2 08:58:24.195722 kernel: DMA32 empty Jul 2 08:58:24.195744 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 2 08:58:24.195761 kernel: Movable zone start for each node Jul 2 08:58:24.195777 kernel: Early memory node ranges Jul 2 08:58:24.195794 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 2 08:58:24.195811 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 2 08:58:24.195827 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 2 08:58:24.195844 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 2 08:58:24.195860 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 2 08:58:24.195877 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 2 08:58:24.195894 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 2 08:58:24.195910 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 2 08:58:24.195927 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 2 08:58:24.195949 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Jul 2 08:58:24.195966 kernel: psci: probing for conduit method from ACPI. Jul 2 08:58:24.195990 kernel: psci: PSCIv1.0 detected in firmware. Jul 2 08:58:24.196008 kernel: psci: Using standard PSCI v0.2 function IDs Jul 2 08:58:24.196025 kernel: psci: Trusted OS migration not required Jul 2 08:58:24.196047 kernel: psci: SMC Calling Convention v1.1 Jul 2 08:58:24.196065 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jul 2 08:58:24.196082 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jul 2 08:58:24.196100 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 2 08:58:24.196117 kernel: Detected PIPT I-cache on CPU0 Jul 2 08:58:24.196134 kernel: CPU features: detected: GIC system register CPU interface Jul 2 08:58:24.196152 kernel: CPU features: detected: Spectre-v2 Jul 2 08:58:24.196169 kernel: CPU features: detected: Spectre-v3a Jul 2 08:58:24.196187 kernel: CPU features: detected: Spectre-BHB Jul 2 08:58:24.196204 kernel: CPU features: detected: ARM erratum 1742098 Jul 2 08:58:24.196222 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 2 08:58:24.196244 kernel: alternatives: applying boot alternatives Jul 2 08:58:24.196265 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930 Jul 2 08:58:24.196283 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 08:58:24.196301 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 08:58:24.196318 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 08:58:24.196336 kernel: Fallback order for Node 0: 0 Jul 2 08:58:24.196353 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 2 08:58:24.196371 kernel: Policy zone: Normal Jul 2 08:58:24.196388 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 08:58:24.196405 kernel: software IO TLB: area num 2. Jul 2 08:58:24.196423 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 2 08:58:24.197538 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved) Jul 2 08:58:24.197583 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 08:58:24.197603 kernel: trace event string verifier disabled Jul 2 08:58:24.197622 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 08:58:24.197642 kernel: rcu: RCU event tracing is enabled. Jul 2 08:58:24.197660 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 08:58:24.197678 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 08:58:24.197696 kernel: Tracing variant of Tasks RCU enabled. Jul 2 08:58:24.197714 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 08:58:24.197732 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 08:58:24.197750 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 2 08:58:24.197777 kernel: GICv3: 96 SPIs implemented Jul 2 08:58:24.197795 kernel: GICv3: 0 Extended SPIs implemented Jul 2 08:58:24.197812 kernel: Root IRQ handler: gic_handle_irq Jul 2 08:58:24.197830 kernel: GICv3: GICv3 features: 16 PPIs Jul 2 08:58:24.197847 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 2 08:58:24.197864 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 2 08:58:24.197882 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Jul 2 08:58:24.197900 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Jul 2 08:58:24.197918 kernel: GICv3: using LPI property table @0x00000004000e0000 Jul 2 08:58:24.197937 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 2 08:58:24.197954 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Jul 2 08:58:24.197971 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 08:58:24.197994 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 2 08:58:24.198012 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 2 08:58:24.198030 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 2 08:58:24.198047 kernel: Console: colour dummy device 80x25 Jul 2 08:58:24.198065 kernel: printk: console [tty1] enabled Jul 2 08:58:24.198083 kernel: ACPI: Core revision 20230628 Jul 2 08:58:24.198101 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 2 08:58:24.198119 kernel: pid_max: default: 32768 minimum: 301 Jul 2 08:58:24.198137 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 08:58:24.198155 kernel: SELinux: Initializing. Jul 2 08:58:24.198178 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:58:24.198196 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 08:58:24.198214 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:58:24.198231 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 08:58:24.198249 kernel: rcu: Hierarchical SRCU implementation. Jul 2 08:58:24.198267 kernel: rcu: Max phase no-delay instances is 400. Jul 2 08:58:24.198284 kernel: Platform MSI: ITS@0x10080000 domain created Jul 2 08:58:24.198302 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 2 08:58:24.198319 kernel: Remapping and enabling EFI services. Jul 2 08:58:24.198342 kernel: smp: Bringing up secondary CPUs ... Jul 2 08:58:24.198359 kernel: Detected PIPT I-cache on CPU1 Jul 2 08:58:24.198377 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 2 08:58:24.198395 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Jul 2 08:58:24.198412 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 2 08:58:24.198430 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 08:58:24.198464 kernel: SMP: Total of 2 processors activated. 
Jul 2 08:58:24.198488 kernel: CPU features: detected: 32-bit EL0 Support Jul 2 08:58:24.198506 kernel: CPU features: detected: 32-bit EL1 Support Jul 2 08:58:24.198531 kernel: CPU features: detected: CRC32 instructions Jul 2 08:58:24.198549 kernel: CPU: All CPU(s) started at EL1 Jul 2 08:58:24.198579 kernel: alternatives: applying system-wide alternatives Jul 2 08:58:24.198602 kernel: devtmpfs: initialized Jul 2 08:58:24.198620 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 08:58:24.198638 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 08:58:24.198656 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 08:58:24.198675 kernel: SMBIOS 3.0.0 present. Jul 2 08:58:24.198694 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 2 08:58:24.198717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 08:58:24.198736 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 2 08:58:24.198754 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 2 08:58:24.198788 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 2 08:58:24.198812 kernel: audit: initializing netlink subsys (disabled) Jul 2 08:58:24.198830 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1 Jul 2 08:58:24.198849 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 08:58:24.198873 kernel: cpuidle: using governor menu Jul 2 08:58:24.198892 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 2 08:58:24.198911 kernel: ASID allocator initialised with 65536 entries Jul 2 08:58:24.198929 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 08:58:24.198947 kernel: Serial: AMBA PL011 UART driver Jul 2 08:58:24.198966 kernel: Modules: 17600 pages in range for non-PLT usage Jul 2 08:58:24.198984 kernel: Modules: 509120 pages in range for PLT usage Jul 2 08:58:24.199003 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 08:58:24.199021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 08:58:24.199044 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 2 08:58:24.199063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 2 08:58:24.199081 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 08:58:24.199100 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 08:58:24.199118 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 2 08:58:24.199137 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 2 08:58:24.199155 kernel: ACPI: Added _OSI(Module Device) Jul 2 08:58:24.199173 kernel: ACPI: Added _OSI(Processor Device) Jul 2 08:58:24.199192 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 08:58:24.199215 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 08:58:24.199234 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 08:58:24.199252 kernel: ACPI: Interpreter enabled Jul 2 08:58:24.199270 kernel: ACPI: Using GIC for interrupt routing Jul 2 08:58:24.199288 kernel: ACPI: MCFG table detected, 1 entries Jul 2 08:58:24.199306 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 2 08:58:24.200118 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 08:58:24.200337 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] 
Jul 2 08:58:24.200574 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 2 08:58:24.200772 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 2 08:58:24.200968 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 2 08:58:24.200994 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 2 08:58:24.201013 kernel: acpiphp: Slot [1] registered Jul 2 08:58:24.201031 kernel: acpiphp: Slot [2] registered Jul 2 08:58:24.201049 kernel: acpiphp: Slot [3] registered Jul 2 08:58:24.201067 kernel: acpiphp: Slot [4] registered Jul 2 08:58:24.201086 kernel: acpiphp: Slot [5] registered Jul 2 08:58:24.201111 kernel: acpiphp: Slot [6] registered Jul 2 08:58:24.201129 kernel: acpiphp: Slot [7] registered Jul 2 08:58:24.201147 kernel: acpiphp: Slot [8] registered Jul 2 08:58:24.201165 kernel: acpiphp: Slot [9] registered Jul 2 08:58:24.201183 kernel: acpiphp: Slot [10] registered Jul 2 08:58:24.201201 kernel: acpiphp: Slot [11] registered Jul 2 08:58:24.201220 kernel: acpiphp: Slot [12] registered Jul 2 08:58:24.201238 kernel: acpiphp: Slot [13] registered Jul 2 08:58:24.201256 kernel: acpiphp: Slot [14] registered Jul 2 08:58:24.201279 kernel: acpiphp: Slot [15] registered Jul 2 08:58:24.201298 kernel: acpiphp: Slot [16] registered Jul 2 08:58:24.201316 kernel: acpiphp: Slot [17] registered Jul 2 08:58:24.201334 kernel: acpiphp: Slot [18] registered Jul 2 08:58:24.201352 kernel: acpiphp: Slot [19] registered Jul 2 08:58:24.201370 kernel: acpiphp: Slot [20] registered Jul 2 08:58:24.201389 kernel: acpiphp: Slot [21] registered Jul 2 08:58:24.201407 kernel: acpiphp: Slot [22] registered Jul 2 08:58:24.201425 kernel: acpiphp: Slot [23] registered Jul 2 08:58:24.201443 kernel: acpiphp: Slot [24] registered Jul 2 08:58:24.202113 kernel: acpiphp: Slot [25] registered Jul 2 08:58:24.202134 kernel: acpiphp: Slot [26] registered Jul 2 08:58:24.202153 kernel: acpiphp: Slot [27] registered Jul 2 08:58:24.202172 kernel: acpiphp: Slot [28] registered Jul 2 08:58:24.202191 kernel: acpiphp: Slot [29] registered Jul 2 08:58:24.202209 kernel: acpiphp: Slot [30] registered Jul 2 08:58:24.202228 kernel: acpiphp: Slot [31] registered Jul 2 08:58:24.202246 kernel: PCI host bridge to bus 0000:00 Jul 2 08:58:24.202534 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 2 08:58:24.203287 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 2 08:58:24.203585 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 2 08:58:24.203774 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 2 08:58:24.204007 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 2 08:58:24.204246 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 2 08:58:24.204651 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 2 08:58:24.204897 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 2 08:58:24.205106 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 2 08:58:24.205313 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:58:24.205567 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 2 08:58:24.205786 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 2 08:58:24.205995 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 2 08:58:24.206202 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] 
Jul 2 08:58:24.206416 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:58:24.206658 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 2 08:58:24.206902 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 2 08:58:24.207209 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 2 08:58:24.207507 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 2 08:58:24.207729 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 2 08:58:24.207915 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 2 08:58:24.208105 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 2 08:58:24.208283 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 2 08:58:24.208308 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 2 08:58:24.208328 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 2 08:58:24.208346 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 2 08:58:24.208365 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 2 08:58:24.208383 kernel: iommu: Default domain type: Translated Jul 2 08:58:24.208401 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 2 08:58:24.208426 kernel: efivars: Registered efivars operations Jul 2 08:58:24.208445 kernel: vgaarb: loaded Jul 2 08:58:24.208512 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 2 08:58:24.208532 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 08:58:24.208550 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 08:58:24.208568 kernel: pnp: PnP ACPI init Jul 2 08:58:24.208785 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 2 08:58:24.208813 kernel: pnp: PnP ACPI: found 1 devices Jul 2 08:58:24.208838 kernel: NET: Registered PF_INET protocol family Jul 2 08:58:24.208857 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 08:58:24.208876 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 08:58:24.208894 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 08:58:24.208913 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 08:58:24.208932 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 08:58:24.208950 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 08:58:24.208969 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:58:24.208987 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 08:58:24.209010 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 08:58:24.209029 kernel: PCI: CLS 0 bytes, default 64 Jul 2 08:58:24.209047 kernel: kvm [1]: HYP mode not available Jul 2 08:58:24.209065 kernel: Initialise system trusted keyrings Jul 2 08:58:24.209084 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 08:58:24.209102 kernel: Key type asymmetric registered Jul 2 08:58:24.209120 kernel: Asymmetric key parser 'x509' registered Jul 2 08:58:24.209138 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 2 08:58:24.209157 kernel: io scheduler mq-deadline registered Jul 2 08:58:24.209180 kernel: io scheduler kyber registered Jul 2 08:58:24.209199 kernel: io scheduler bfq registered Jul 2 08:58:24.209407 kernel: pl061_gpio 
ARMH0061:00: PL061 GPIO chip registered Jul 2 08:58:24.209434 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 2 08:58:24.209471 kernel: ACPI: button: Power Button [PWRB] Jul 2 08:58:24.209492 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 2 08:58:24.209512 kernel: ACPI: button: Sleep Button [SLPB] Jul 2 08:58:24.209531 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 08:58:24.209556 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 2 08:58:24.209768 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 2 08:58:24.209794 kernel: printk: console [ttyS0] disabled Jul 2 08:58:24.209816 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 2 08:58:24.209842 kernel: printk: console [ttyS0] enabled Jul 2 08:58:24.209861 kernel: printk: bootconsole [uart0] disabled Jul 2 08:58:24.209879 kernel: thunder_xcv, ver 1.0 Jul 2 08:58:24.209897 kernel: thunder_bgx, ver 1.0 Jul 2 08:58:24.209916 kernel: nicpf, ver 1.0 Jul 2 08:58:24.209934 kernel: nicvf, ver 1.0 Jul 2 08:58:24.210164 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 2 08:58:24.210360 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:58:23 UTC (1719910703) Jul 2 08:58:24.210386 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 08:58:24.210405 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 2 08:58:24.210424 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 2 08:58:24.210442 kernel: watchdog: Hard watchdog permanently disabled Jul 2 08:58:24.214535 kernel: NET: Registered PF_INET6 protocol family Jul 2 08:58:24.214559 kernel: Segment Routing with IPv6 Jul 2 08:58:24.214589 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 08:58:24.214608 kernel: NET: Registered PF_PACKET protocol family Jul 2 08:58:24.214626 kernel: Key type dns_resolver registered Jul 2 08:58:24.214645 kernel: registered taskstats version 1 Jul 2 08:58:24.214663 kernel: Loading compiled-in X.509 certificates Jul 2 08:58:24.214682 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf' Jul 2 08:58:24.214700 kernel: Key type .fscrypt registered Jul 2 08:58:24.214718 kernel: Key type fscrypt-provisioning registered Jul 2 08:58:24.214736 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 08:58:24.214760 kernel: ima: Allocated hash algorithm: sha1 Jul 2 08:58:24.214796 kernel: ima: No architecture policies found Jul 2 08:58:24.214817 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 2 08:58:24.214835 kernel: clk: Disabling unused clocks Jul 2 08:58:24.214853 kernel: Freeing unused kernel memory: 39040K Jul 2 08:58:24.214872 kernel: Run /init as init process Jul 2 08:58:24.214890 kernel: with arguments: Jul 2 08:58:24.214908 kernel: /init Jul 2 08:58:24.214926 kernel: with environment: Jul 2 08:58:24.214950 kernel: HOME=/ Jul 2 08:58:24.214968 kernel: TERM=linux Jul 2 08:58:24.214986 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 08:58:24.215009 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:58:24.215033 systemd[1]: Detected virtualization amazon. 
Jul 2 08:58:24.215053 systemd[1]: Detected architecture arm64. Jul 2 08:58:24.215073 systemd[1]: Running in initrd. Jul 2 08:58:24.215092 systemd[1]: No hostname configured, using default hostname. Jul 2 08:58:24.215117 systemd[1]: Hostname set to . Jul 2 08:58:24.215138 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:58:24.215157 systemd[1]: Queued start job for default target initrd.target. Jul 2 08:58:24.215177 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:58:24.215197 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:58:24.215218 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 08:58:24.215239 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:58:24.215265 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 08:58:24.215286 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 08:58:24.215309 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 08:58:24.215330 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 08:58:24.215350 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:58:24.215370 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:58:24.215390 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:58:24.215415 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:58:24.215435 systemd[1]: Reached target swap.target - Swaps. Jul 2 08:58:24.215474 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:58:24.215497 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:58:24.215518 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:58:24.215538 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 08:58:24.215558 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 08:58:24.215578 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:58:24.215598 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:58:24.215626 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:58:24.215646 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:58:24.215666 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 2 08:58:24.215686 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:58:24.215706 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 08:58:24.215725 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 08:58:24.215745 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:58:24.215765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:58:24.215790 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:58:24.215811 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 08:58:24.215831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 2 08:58:24.215850 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 08:58:24.215919 systemd-journald[250]: Collecting audit messages is disabled. Jul 2 08:58:24.215968 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 08:58:24.215990 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:58:24.216010 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 08:58:24.216029 kernel: Bridge firewalling registered Jul 2 08:58:24.216055 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:58:24.216075 systemd-journald[250]: Journal started Jul 2 08:58:24.216115 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2016676926ce9563c7def3c6570527) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:58:24.171648 systemd-modules-load[251]: Inserted module 'overlay' Jul 2 08:58:24.216208 systemd-modules-load[251]: Inserted module 'br_netfilter' Jul 2 08:58:24.229162 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:58:24.229993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:58:24.235236 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:58:24.248684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:58:24.257018 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:58:24.261697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:58:24.298505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:58:24.307768 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:58:24.323859 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:58:24.330524 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:58:24.342678 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 08:58:24.347989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:58:24.381105 dracut-cmdline[288]: dracut-dracut-053 Jul 2 08:58:24.387840 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930 Jul 2 08:58:24.411306 systemd-resolved[286]: Positive Trust Anchors: Jul 2 08:58:24.411340 systemd-resolved[286]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:58:24.411401 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:58:24.562491 kernel: SCSI subsystem initialized Jul 2 08:58:24.570485 kernel: Loading iSCSI transport class v2.0-870. Jul 2 08:58:24.583487 kernel: iscsi: registered transport (tcp) Jul 2 08:58:24.606006 kernel: iscsi: registered transport (qla4xxx) Jul 2 08:58:24.606081 kernel: QLogic iSCSI HBA Driver Jul 2 08:58:24.655502 kernel: random: crng init done Jul 2 08:58:24.655690 systemd-resolved[286]: Defaulting to hostname 'linux'. Jul 2 08:58:24.658787 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:58:24.662639 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:58:24.685240 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 08:58:24.693687 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 08:58:24.737673 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 08:58:24.737760 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:58:24.737788 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 08:58:24.804496 kernel: raid6: neonx8 gen() 6648 MB/s Jul 2 08:58:24.821482 kernel: raid6: neonx4 gen() 6486 MB/s Jul 2 08:58:24.838482 kernel: raid6: neonx2 gen() 5427 MB/s Jul 2 08:58:24.855481 kernel: raid6: neonx1 gen() 3946 MB/s Jul 2 08:58:24.872482 kernel: raid6: int64x8 gen() 3795 MB/s Jul 2 08:58:24.889482 kernel: raid6: int64x4 gen() 3713 MB/s Jul 2 08:58:24.906481 kernel: raid6: int64x2 gen() 3594 MB/s Jul 2 08:58:24.924177 kernel: raid6: int64x1 gen() 2767 MB/s Jul 2 08:58:24.924218 kernel: raid6: using algorithm neonx8 gen() 6648 MB/s Jul 2 08:58:24.942140 kernel: raid6: .... xor() 4919 MB/s, rmw enabled Jul 2 08:58:24.942178 kernel: raid6: using neon recovery algorithm Jul 2 08:58:24.949485 kernel: xor: measuring software checksum speed Jul 2 08:58:24.951485 kernel: 8regs : 11037 MB/sec Jul 2 08:58:24.953480 kernel: 32regs : 11924 MB/sec Jul 2 08:58:24.955484 kernel: arm64_neon : 9533 MB/sec Jul 2 08:58:24.955518 kernel: xor: using function: 32regs (11924 MB/sec) Jul 2 08:58:25.040501 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 08:58:25.059127 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:58:25.070778 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:58:25.108800 systemd-udevd[471]: Using default interface naming scheme 'v255'. Jul 2 08:58:25.117283 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:58:25.141804 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 08:58:25.170367 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation Jul 2 08:58:25.229518 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 2 08:58:25.245736 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:58:25.360402 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:58:25.373824 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 08:58:25.422859 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 08:58:25.430030 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:58:25.431387 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:58:25.432020 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:58:25.452923 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 08:58:25.497000 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:58:25.543477 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 2 08:58:25.543544 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 2 08:58:25.582789 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 08:58:25.583058 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 08:58:25.583291 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c1:89:73:29:35 Jul 2 08:58:25.564824 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:58:25.565043 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:58:25.568970 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:58:25.571028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:58:25.571290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:58:25.573530 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:58:25.587861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:58:25.593226 (udev-worker)[521]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:58:25.630499 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 2 08:58:25.633482 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 08:58:25.641502 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 08:58:25.647476 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 08:58:25.647535 kernel: GPT:9289727 != 16777215 Jul 2 08:58:25.647570 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 08:58:25.647693 kernel: GPT:9289727 != 16777215 Jul 2 08:58:25.648037 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:58:25.655515 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:58:25.655550 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:58:25.664734 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:58:25.693446 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:58:25.786232 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Jul 2 08:58:25.804092 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (531) Jul 2 08:58:25.816494 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (515) Jul 2 08:58:25.884821 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 2 08:58:25.901680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 08:58:25.924791 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 2 08:58:25.926028 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 2 08:58:25.943797 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 08:58:25.955097 disk-uuid[661]: Primary Header is updated. Jul 2 08:58:25.955097 disk-uuid[661]: Secondary Entries is updated. Jul 2 08:58:25.955097 disk-uuid[661]: Secondary Header is updated. Jul 2 08:58:25.965486 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:58:25.973790 kernel: GPT:disk_guids don't match. Jul 2 08:58:25.973850 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:58:25.974719 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:58:25.985478 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:58:26.986581 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 08:58:26.987587 disk-uuid[662]: The operation has completed successfully. Jul 2 08:58:27.156183 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:58:27.156665 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 08:58:27.207707 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 08:58:27.215799 sh[1003]: Success Jul 2 08:58:27.244489 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 08:58:27.339106 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 08:58:27.356675 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 08:58:27.360612 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 08:58:27.393064 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1 Jul 2 08:58:27.393139 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:58:27.393166 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 08:58:27.395831 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 08:58:27.395866 kernel: BTRFS info (device dm-0): using free space tree Jul 2 08:58:27.506498 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 2 08:58:27.537711 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 08:58:27.541606 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 08:58:27.552753 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 08:58:27.557578 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 2 08:58:27.581522 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:58:27.581584 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:58:27.581612 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:58:27.590505 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:58:27.609740 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 08:58:27.612696 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:58:27.639245 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 08:58:27.644892 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 08:58:27.749085 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:58:27.777721 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:58:27.832019 systemd-networkd[1197]: lo: Link UP Jul 2 08:58:27.832043 systemd-networkd[1197]: lo: Gained carrier Jul 2 08:58:27.835863 systemd-networkd[1197]: Enumeration completed Jul 2 08:58:27.836916 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:58:27.839538 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:58:27.839545 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:58:27.842632 systemd[1]: Reached target network.target - Network. Jul 2 08:58:27.845202 systemd-networkd[1197]: eth0: Link UP Jul 2 08:58:27.845211 systemd-networkd[1197]: eth0: Gained carrier Jul 2 08:58:27.845230 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:58:27.866543 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.26.125/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:58:28.002809 ignition[1111]: Ignition 2.18.0 Jul 2 08:58:28.002838 ignition[1111]: Stage: fetch-offline Jul 2 08:58:28.004357 ignition[1111]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:28.004384 ignition[1111]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:28.007366 ignition[1111]: Ignition finished successfully Jul 2 08:58:28.012653 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:58:28.022770 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 2 08:58:28.057581 ignition[1207]: Ignition 2.18.0 Jul 2 08:58:28.057602 ignition[1207]: Stage: fetch Jul 2 08:58:28.058184 ignition[1207]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:28.058208 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:28.058336 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:28.079975 ignition[1207]: PUT result: OK Jul 2 08:58:28.084200 ignition[1207]: parsed url from cmdline: "" Jul 2 08:58:28.084216 ignition[1207]: no config URL provided Jul 2 08:58:28.084231 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:58:28.084255 ignition[1207]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:58:28.084286 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:28.087788 ignition[1207]: PUT result: OK Jul 2 08:58:28.087867 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 2 08:58:28.092738 ignition[1207]: GET result: OK Jul 2 08:58:28.094756 ignition[1207]: parsing config with SHA512: 1169e89586c1b1482e5ca7c07a1b9c93016f37c9982650f05e15da38e9589660470c92e0f41eec00f9b1ec7b193c978b339626fb790b08edde08ed5d54afae4b Jul 2 08:58:28.106557 unknown[1207]: fetched base config from "system" Jul 2 08:58:28.108275 unknown[1207]: fetched base config from "system" Jul 2 08:58:28.108300 unknown[1207]: fetched user config from "aws" Jul 2 08:58:28.111949 ignition[1207]: fetch: fetch complete Jul 2 08:58:28.111981 ignition[1207]: fetch: fetch passed Jul 2 08:58:28.112076 ignition[1207]: Ignition finished successfully Jul 2 08:58:28.119240 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 08:58:28.128764 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 08:58:28.156515 ignition[1214]: Ignition 2.18.0 Jul 2 08:58:28.156537 ignition[1214]: Stage: kargs Jul 2 08:58:28.157130 ignition[1214]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:28.157154 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:28.157292 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:28.160524 ignition[1214]: PUT result: OK Jul 2 08:58:28.170274 ignition[1214]: kargs: kargs passed Jul 2 08:58:28.170413 ignition[1214]: Ignition finished successfully Jul 2 08:58:28.174661 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 08:58:28.190265 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 08:58:28.211721 ignition[1221]: Ignition 2.18.0 Jul 2 08:58:28.212210 ignition[1221]: Stage: disks Jul 2 08:58:28.212833 ignition[1221]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:28.212857 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:28.213017 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:28.217043 ignition[1221]: PUT result: OK Jul 2 08:58:28.224771 ignition[1221]: disks: disks passed Jul 2 08:58:28.224903 ignition[1221]: Ignition finished successfully Jul 2 08:58:28.229637 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 08:58:28.231931 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 08:58:28.234988 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 08:58:28.241822 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:58:28.245307 systemd[1]: Reached target sysinit.target - System Initialization. 
Jul 2 08:58:28.245583 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:58:28.260999 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 08:58:28.309112 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 08:58:28.316755 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 08:58:28.330778 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 08:58:28.421504 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none. Jul 2 08:58:28.422968 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 08:58:28.426493 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 08:58:28.444635 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:58:28.454899 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 08:58:28.459095 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 08:58:28.461922 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:58:28.461973 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:58:28.481507 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249) Jul 2 08:58:28.485020 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:58:28.485092 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:58:28.485119 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:58:28.487871 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 08:58:28.493633 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:58:28.503710 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 08:58:28.510607 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:58:28.923999 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:58:28.943257 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:58:28.951765 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:58:28.960838 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:58:29.038767 systemd-networkd[1197]: eth0: Gained IPv6LL Jul 2 08:58:29.266425 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 08:58:29.275666 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 08:58:29.279818 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 08:58:29.312541 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 08:58:29.314533 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:58:29.341634 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 2 08:58:29.358509 ignition[1363]: INFO : Ignition 2.18.0 Jul 2 08:58:29.358509 ignition[1363]: INFO : Stage: mount Jul 2 08:58:29.358509 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:29.358509 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:29.358509 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:29.368496 ignition[1363]: INFO : PUT result: OK Jul 2 08:58:29.371894 ignition[1363]: INFO : mount: mount passed Jul 2 08:58:29.374677 ignition[1363]: INFO : Ignition finished successfully Jul 2 08:58:29.377105 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 08:58:29.387665 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 08:58:29.432768 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:58:29.454475 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1374) Jul 2 08:58:29.459077 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e Jul 2 08:58:29.459124 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:58:29.459151 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 08:58:29.463492 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 08:58:29.466805 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:58:29.502672 ignition[1391]: INFO : Ignition 2.18.0 Jul 2 08:58:29.502672 ignition[1391]: INFO : Stage: files Jul 2 08:58:29.505960 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:29.505960 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:29.505960 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:29.512414 ignition[1391]: INFO : PUT result: OK Jul 2 08:58:29.517056 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:58:29.519259 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:58:29.519259 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:58:29.549180 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:58:29.551694 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:58:29.551694 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:58:29.550670 unknown[1391]: wrote ssh authorized keys file for user: core Jul 2 08:58:29.558561 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:58:29.562070 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 08:58:29.611254 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 08:58:29.695496 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:58:29.695496 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] 
writing file "/sysroot/home/core/install.sh" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:58:29.703405 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jul 2 08:58:30.044066 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 08:58:30.383801 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:58:30.383801 ignition[1391]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 08:58:30.390018 ignition[1391]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:58:30.390018 ignition[1391]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:58:30.390018 ignition[1391]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 08:58:30.390018 ignition[1391]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 08:58:30.390018 ignition[1391]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 08:58:30.390018 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:58:30.390018 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:58:30.390018 ignition[1391]: INFO : files: 
files passed Jul 2 08:58:30.390018 ignition[1391]: INFO : Ignition finished successfully Jul 2 08:58:30.407026 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 08:58:30.436854 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 08:58:30.443710 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 08:58:30.449212 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 08:58:30.452608 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 08:58:30.479587 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:58:30.479587 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:58:30.486333 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:58:30.491502 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:58:30.497658 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 08:58:30.506819 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 08:58:30.556356 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 08:58:30.556794 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 08:58:30.564112 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 08:58:30.565977 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 08:58:30.566875 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 08:58:30.588884 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 08:58:30.617440 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:58:30.634852 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 08:58:30.660287 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:58:30.662928 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:58:30.665707 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 08:58:30.672918 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 08:58:30.673164 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:58:30.676131 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 08:58:30.683099 systemd[1]: Stopped target basic.target - Basic System. Jul 2 08:58:30.685897 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 08:58:30.689155 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:58:30.691775 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 08:58:30.695679 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 08:58:30.699036 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:58:30.701667 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 08:58:30.705047 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 08:58:30.707013 systemd[1]: Stopped target swap.target - Swaps. 
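The Ignition "files" stage that finishes above downloaded a Helm tarball, wrote several manifests under /home/core plus /etc/flatcar/update.conf, and linked a Kubernetes sysext image into /etc/extensions. As a hedged illustration only (this is not the instance's actual user data, and the 3.4.0 spec version is an assumption), a storage section with that shape looks like the following when emitted as Ignition v3 JSON; the paths and URLs are copied from the log, everything else is illustrative.

```python
# Illustrative sketch only -- not the actual user data for this instance.
# It shows the shape of an Ignition v3 "storage" section that would produce
# the file/link operations logged above (paths and URLs copied from the log).
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version accepted by Ignition 2.18
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"
                },
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw",
                "contents": {
                    "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw"
                },
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw",
            }
        ],
    },
}

print(json.dumps(config, indent=2))
```

In practice such a config is usually authored as Butane YAML and transpiled; the JSON above is the form Ignition itself consumes.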
Jul 2 08:58:30.708975 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 08:58:30.709202 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:58:30.716153 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:58:30.718651 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:58:30.731115 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 08:58:30.734533 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:58:30.739300 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 08:58:30.739559 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 08:58:30.745234 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 08:58:30.745613 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:58:30.751958 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 08:58:30.752164 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 08:58:30.766960 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 08:58:30.767139 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 08:58:30.767384 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:58:30.775810 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 08:58:30.787280 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 08:58:30.787643 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:58:30.819695 ignition[1444]: INFO : Ignition 2.18.0 Jul 2 08:58:30.819695 ignition[1444]: INFO : Stage: umount Jul 2 08:58:30.819695 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:58:30.819695 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 08:58:30.819695 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 08:58:30.802220 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 08:58:30.850535 ignition[1444]: INFO : PUT result: OK Jul 2 08:58:30.850535 ignition[1444]: INFO : umount: umount passed Jul 2 08:58:30.850535 ignition[1444]: INFO : Ignition finished successfully Jul 2 08:58:30.802576 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 08:58:30.829007 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 08:58:30.829197 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 08:58:30.849800 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 08:58:30.851029 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 08:58:30.851249 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 08:58:30.864107 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 08:58:30.864593 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 08:58:30.866398 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:58:30.866528 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 08:58:30.871591 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 08:58:30.871676 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 08:58:30.873518 systemd[1]: Stopped target network.target - Network. 
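Each Ignition stage above (mount, files, umount), and later the Flatcar metadata agent, first issues PUT http://169.254.169.254/latest/api/token before reading any instance metadata. That is the standard IMDSv2 session pattern; a minimal Python sketch of it follows (the header names and short timeout come from the public IMDSv2 interface, not from this log).

```python
# Minimal IMDSv2 sketch: obtain a session token with PUT, then present it on
# metadata GETs -- the same request pattern the log shows for Ignition and
# coreos-metadata. Only works from inside an EC2 instance.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # Same endpoint style the metadata agent queries later in the log.
    print(imds_get("/2021-01-03/meta-data/instance-id", token))
```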
Jul 2 08:58:30.875162 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 08:58:30.875584 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:58:30.878948 systemd[1]: Stopped target paths.target - Path Units. Jul 2 08:58:30.880587 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 08:58:30.883938 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:58:30.886384 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 08:58:30.889087 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 08:58:30.893874 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 08:58:30.893954 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:58:30.896556 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 08:58:30.896626 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:58:30.898527 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:58:30.898608 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 08:58:30.900506 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 08:58:30.900581 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 08:58:30.902748 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 08:58:30.904713 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 08:58:30.928697 systemd-networkd[1197]: eth0: DHCPv6 lease lost Jul 2 08:58:30.932018 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 08:58:30.932288 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 08:58:30.936980 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 08:58:30.938834 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 08:58:30.947301 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 08:58:30.947816 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:58:30.972776 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 08:58:30.974634 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 08:58:30.975069 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:58:30.988948 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:58:30.990744 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:58:30.994473 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 08:58:30.994567 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 08:58:30.996546 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 08:58:30.996623 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:58:31.002164 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:58:31.007025 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:58:31.007239 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 08:58:31.022977 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 08:58:31.023143 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 08:58:31.042340 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 2 08:58:31.045249 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:58:31.051747 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 08:58:31.052344 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 08:58:31.058991 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 08:58:31.059262 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 08:58:31.065019 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:58:31.065097 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:58:31.067015 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 08:58:31.067101 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:58:31.069510 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:58:31.069588 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 08:58:31.090406 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:58:31.090524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:58:31.106884 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 08:58:31.111230 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 08:58:31.111347 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:58:31.114226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:58:31.114307 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:58:31.148112 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:58:31.148520 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 08:58:31.156241 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 08:58:31.173721 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 08:58:31.192835 systemd[1]: Switching root. Jul 2 08:58:31.253773 systemd-journald[250]: Journal stopped Jul 2 08:58:34.174717 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Jul 2 08:58:34.174846 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 08:58:34.174891 kernel: SELinux: policy capability open_perms=1 Jul 2 08:58:34.174927 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 08:58:34.174966 kernel: SELinux: policy capability always_check_network=0 Jul 2 08:58:34.174998 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 08:58:34.175029 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 08:58:34.175060 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 08:58:34.175091 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 08:58:34.175121 kernel: audit: type=1403 audit(1719910712.311:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:58:34.175160 systemd[1]: Successfully loaded SELinux policy in 68.996ms. Jul 2 08:58:34.175204 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.783ms. 
Jul 2 08:58:34.175242 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:58:34.175273 systemd[1]: Detected virtualization amazon. Jul 2 08:58:34.175304 systemd[1]: Detected architecture arm64. Jul 2 08:58:34.175335 systemd[1]: Detected first boot. Jul 2 08:58:34.175368 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:58:34.175400 zram_generator::config[1486]: No configuration found. Jul 2 08:58:34.175437 systemd[1]: Populated /etc with preset unit settings. Jul 2 08:58:34.179742 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 08:58:34.179789 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 08:58:34.179830 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 08:58:34.179864 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 08:58:34.179913 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 08:58:34.179949 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 08:58:34.179980 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 08:58:34.180012 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 08:58:34.180044 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 08:58:34.180075 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 08:58:34.180110 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 08:58:34.180142 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:58:34.180173 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:58:34.180205 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 08:58:34.180237 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 08:58:34.180270 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 08:58:34.180300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:58:34.180332 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 08:58:34.180362 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:58:34.180397 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 08:58:34.180427 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 08:58:34.180526 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 08:58:34.180571 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 08:58:34.180604 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:58:34.180637 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:58:34.180666 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:58:34.180695 systemd[1]: Reached target swap.target - Swaps. 
Jul 2 08:58:34.180731 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 08:58:34.180763 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 08:58:34.180795 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:58:34.180825 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:58:34.180872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:58:34.180905 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 08:58:34.180939 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 08:58:34.180973 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 08:58:34.181005 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 08:58:34.181041 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 08:58:34.181073 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 08:58:34.181103 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 08:58:34.181136 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:58:34.181165 systemd[1]: Reached target machines.target - Containers. Jul 2 08:58:34.181199 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 08:58:34.181230 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:58:34.181259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:58:34.181293 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 08:58:34.181328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:58:34.181358 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:58:34.181389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:58:34.181420 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 08:58:34.181484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:58:34.181523 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 08:58:34.181553 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 08:58:34.181583 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 08:58:34.181620 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 08:58:34.181649 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 08:58:34.181678 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:58:34.181707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:58:34.181736 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 08:58:34.181765 kernel: fuse: init (API version 7.39) Jul 2 08:58:34.181794 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 08:58:34.181822 kernel: loop: module loaded Jul 2 08:58:34.181850 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 2 08:58:34.181885 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 08:58:34.181915 systemd[1]: Stopped verity-setup.service. Jul 2 08:58:34.181944 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 08:58:34.181972 kernel: ACPI: bus type drm_connector registered Jul 2 08:58:34.182002 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 08:58:34.182034 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 08:58:34.182064 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 08:58:34.182103 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 08:58:34.182137 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 08:58:34.182167 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:58:34.182196 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 08:58:34.182228 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 08:58:34.182257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:58:34.182292 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:58:34.182321 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:58:34.182350 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:58:34.182379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:58:34.182413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:58:34.182441 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:58:34.182492 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 08:58:34.182523 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:58:34.182553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:58:34.182588 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 08:58:34.182621 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 08:58:34.182709 systemd-journald[1567]: Collecting audit messages is disabled. Jul 2 08:58:34.182772 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 08:58:34.182809 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:58:34.182842 systemd-journald[1567]: Journal started Jul 2 08:58:34.182890 systemd-journald[1567]: Runtime Journal (/run/log/journal/ec2016676926ce9563c7def3c6570527) is 8.0M, max 75.3M, 67.3M free. Jul 2 08:58:33.485861 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:58:33.564113 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 08:58:33.564955 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 08:58:34.184849 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:58:34.190546 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 08:58:34.207488 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 08:58:34.220126 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 08:58:34.220220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
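The modprobe@ instances above (configfs, dm_mod, drm, efi_pstore, fuse, loop) simply load the kernel module named by the instance, and the kernel lines confirm fuse, loop and drm coming up. A small illustrative check, assuming only the standard /proc/modules and /sys/module interfaces (built-in modules appear under /sys/module but not in /proc/modules):

```python
# Illustrative helper: report whether the modules loaded by those
# modprobe@.service instances are present on the running kernel.
from pathlib import Path

def module_loaded(name: str) -> bool:
    if Path(f"/sys/module/{name}").is_dir():  # covers built-in modules too
        return True
    with open("/proc/modules") as f:
        return any(line.split()[0] == name for line in f)

for mod in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
    print(mod, module_loaded(mod))
```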
Jul 2 08:58:34.236509 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 08:58:34.236597 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:58:34.251708 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 08:58:34.251795 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:58:34.274776 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 08:58:34.274868 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:58:34.281139 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 08:58:34.284094 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:58:34.286753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 08:58:34.289755 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 08:58:34.292835 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 08:58:34.295576 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 08:58:34.341577 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 08:58:34.382573 kernel: loop0: detected capacity change from 0 to 59672 Jul 2 08:58:34.382656 kernel: block loop0: the capability attribute has been deprecated. Jul 2 08:58:34.383667 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:58:34.389630 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 08:58:34.391900 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 08:58:34.400866 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 08:58:34.413780 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 08:58:34.421717 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 08:58:34.436875 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 08:58:34.441709 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 08:58:34.451629 systemd-journald[1567]: Time spent on flushing to /var/log/journal/ec2016676926ce9563c7def3c6570527 is 73.016ms for 915 entries. Jul 2 08:58:34.451629 systemd-journald[1567]: System Journal (/var/log/journal/ec2016676926ce9563c7def3c6570527) is 8.0M, max 195.6M, 187.6M free. Jul 2 08:58:34.543360 systemd-journald[1567]: Received client request to flush runtime journal. Jul 2 08:58:34.543437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:58:34.491082 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:58:34.496962 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 08:58:34.512713 udevadm[1623]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 08:58:34.550540 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 08:58:34.568083 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jul 2 08:58:34.578842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:58:34.593481 kernel: loop1: detected capacity change from 0 to 113672 Jul 2 08:58:34.598552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:58:34.633259 systemd-tmpfiles[1633]: ACLs are not supported, ignoring. Jul 2 08:58:34.633866 systemd-tmpfiles[1633]: ACLs are not supported, ignoring. Jul 2 08:58:34.642077 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:58:34.754514 kernel: loop2: detected capacity change from 0 to 51896 Jul 2 08:58:34.897502 kernel: loop3: detected capacity change from 0 to 194096 Jul 2 08:58:34.955585 kernel: loop4: detected capacity change from 0 to 59672 Jul 2 08:58:34.977531 kernel: loop5: detected capacity change from 0 to 113672 Jul 2 08:58:34.994545 kernel: loop6: detected capacity change from 0 to 51896 Jul 2 08:58:35.005501 kernel: loop7: detected capacity change from 0 to 194096 Jul 2 08:58:35.020328 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 08:58:35.023072 (sd-merge)[1641]: Merged extensions into '/usr'. Jul 2 08:58:35.032778 systemd[1]: Reloading requested from client PID 1595 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 08:58:35.032939 systemd[1]: Reloading... Jul 2 08:58:35.215250 zram_generator::config[1665]: No configuration found. Jul 2 08:58:35.486345 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:58:35.609599 systemd[1]: Reloading finished in 575 ms. Jul 2 08:58:35.649333 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 08:58:35.652566 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 08:58:35.666777 systemd[1]: Starting ensure-sysext.service... Jul 2 08:58:35.675826 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:58:35.687838 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:58:35.700484 systemd[1]: Reloading requested from client PID 1717 ('systemctl') (unit ensure-sysext.service)... Jul 2 08:58:35.702147 systemd[1]: Reloading... Jul 2 08:58:35.762301 systemd-tmpfiles[1718]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:58:35.763023 systemd-tmpfiles[1718]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 08:58:35.766268 systemd-tmpfiles[1718]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:58:35.767057 systemd-tmpfiles[1718]: ACLs are not supported, ignoring. Jul 2 08:58:35.767674 systemd-tmpfiles[1718]: ACLs are not supported, ignoring. Jul 2 08:58:35.780430 systemd-tmpfiles[1718]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:58:35.780963 systemd-tmpfiles[1718]: Skipping /boot Jul 2 08:58:35.809848 systemd-tmpfiles[1718]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:58:35.810017 systemd-tmpfiles[1718]: Skipping /boot Jul 2 08:58:35.844687 systemd-udevd[1719]: Using default interface naming scheme 'v255'. Jul 2 08:58:35.904494 zram_generator::config[1750]: No configuration found. 
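The (sd-merge) lines above are systemd-sysext merging the staged extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) into /usr, after which systemd reloads its unit set. A hedged helper that merely lists the images sysext would consider, assuming the usual search directories /etc/extensions, /run/extensions and /var/lib/extensions:

```python
# Illustrative only: enumerate sysext images along the usual search path,
# resolving symlinks such as the /etc/extensions/kubernetes.raw link Ignition
# wrote earlier in the log.
from pathlib import Path

SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def list_sysext_images():
    images = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            for entry in sorted(p.iterdir()):
                images.append((entry.name, entry.resolve()))
    return images

if __name__ == "__main__":
    for name, target in list_sysext_images():
        print(f"{name} -> {target}")
```

`systemd-sysext list` reports the same information; the sketch only mirrors the lookup for illustration.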
Jul 2 08:58:36.095615 (udev-worker)[1781]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:58:36.117513 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1797) Jul 2 08:58:36.285146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:58:36.304510 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1786) Jul 2 08:58:36.396512 ldconfig[1592]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:58:36.495686 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 08:58:36.496419 systemd[1]: Reloading finished in 793 ms. Jul 2 08:58:36.553113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:58:36.556293 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 08:58:36.567638 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:58:36.669957 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 08:58:36.688503 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 08:58:36.697518 systemd[1]: Finished ensure-sysext.service. Jul 2 08:58:36.706798 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:58:36.717948 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 08:58:36.720472 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:58:36.725772 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 08:58:36.731911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:58:36.738798 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:58:36.743834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:58:36.753694 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:58:36.755913 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:58:36.759940 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 08:58:36.772770 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 08:58:36.787894 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:58:36.816438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:58:36.818467 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 08:58:36.826758 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 08:58:36.834045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:58:36.846560 lvm[1916]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:58:36.855152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
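systemd-fsck@dev-disk-by\x2dlabel-OEM.service above operates on the same OEM partition the initrd mounted from /dev/nvme0n1p6; the by-label path is just a udev-maintained symlink. A trivial illustrative lookup:

```python
# Show how /dev/disk/by-label/ maps labels such as OEM and ROOT to partitions
# (the log mounts the OEM label from /dev/nvme0n1p6).
import os
from pathlib import Path

for link in sorted(Path("/dev/disk/by-label").iterdir()):
    print(f"{link.name} -> {os.path.realpath(link)}")
```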
Jul 2 08:58:36.856985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:58:36.861610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:58:36.861918 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:58:36.873690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:58:36.889928 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 08:58:36.892620 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:58:36.894541 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:58:36.910183 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:58:36.910534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:58:36.914579 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:58:36.937676 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 08:58:36.946642 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:58:36.957014 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 08:58:36.963353 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 08:58:36.998510 lvm[1946]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:58:37.015198 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 08:58:37.015871 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:58:37.022557 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 08:58:37.031942 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 08:58:37.035791 augenrules[1952]: No rules Jul 2 08:58:37.047112 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 08:58:37.049030 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:58:37.082193 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 08:58:37.091554 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 08:58:37.112727 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 08:58:37.153623 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:58:37.220678 systemd-networkd[1929]: lo: Link UP Jul 2 08:58:37.220695 systemd-networkd[1929]: lo: Gained carrier Jul 2 08:58:37.224376 systemd-networkd[1929]: Enumeration completed Jul 2 08:58:37.224767 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:58:37.229385 systemd-networkd[1929]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:58:37.230074 systemd-networkd[1929]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 08:58:37.233306 systemd-resolved[1930]: Positive Trust Anchors: Jul 2 08:58:37.233867 systemd-networkd[1929]: eth0: Link UP Jul 2 08:58:37.234152 systemd-networkd[1929]: eth0: Gained carrier Jul 2 08:58:37.234188 systemd-networkd[1929]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:58:37.234811 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 08:58:37.237147 systemd-resolved[1930]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:58:37.237219 systemd-resolved[1930]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:58:37.247554 systemd-networkd[1929]: eth0: DHCPv4 address 172.31.26.125/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 08:58:37.257988 systemd-resolved[1930]: Defaulting to hostname 'linux'. Jul 2 08:58:37.261270 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:58:37.263638 systemd[1]: Reached target network.target - Network. Jul 2 08:58:37.265403 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:58:37.267596 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 08:58:37.269709 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 08:58:37.272181 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 08:58:37.274693 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 08:58:37.276881 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 08:58:37.279879 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 08:58:37.282248 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:58:37.282309 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:58:37.283919 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:58:37.286591 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 08:58:37.292845 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 08:58:37.303751 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 08:58:37.306870 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 08:58:37.309117 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:58:37.311280 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:58:37.313380 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:58:37.313437 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:58:37.325757 systemd[1]: Starting containerd.service - containerd container runtime... 
Jul 2 08:58:37.331160 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 08:58:37.335787 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 08:58:37.347768 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 08:58:37.360835 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 08:58:37.362807 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 08:58:37.369793 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 08:58:37.374913 jq[1980]: false Jul 2 08:58:37.379558 systemd[1]: Started ntpd.service - Network Time Service. Jul 2 08:58:37.385723 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 08:58:37.391694 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 08:58:37.397767 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 08:58:37.417878 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 08:58:37.435725 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 08:58:37.438427 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:58:37.439363 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:58:37.444648 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 08:58:37.450630 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 08:58:37.455654 dbus-daemon[1979]: [system] SELinux support is enabled Jul 2 08:58:37.458848 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 08:58:37.461345 dbus-daemon[1979]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1929 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 08:58:37.468508 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:58:37.468962 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 08:58:37.477049 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 08:58:37.477134 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 08:58:37.479699 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:58:37.481126 dbus-daemon[1979]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 08:58:37.479754 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 08:58:37.496170 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:58:37.497581 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 08:58:37.520340 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
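prepare-helm.service, written by Ignition earlier and started here, is described in the log only as "Unpack helm to /opt/bin"; its unit contents are not shown. As a hedged sketch of that kind of step (the linux-arm64/helm member name matches the tar output later in the log, everything else is assumed):

```python
# Hypothetical sketch -- not the actual prepare-helm.service ExecStart -- of
# unpacking the helm binary Ignition downloaded into /opt/bin.
import tarfile
from pathlib import Path

ARCHIVE = Path("/opt/helm-v3.13.2-linux-arm64.tar.gz")  # written during the files stage
DEST = Path("/opt/bin")

def install_helm() -> None:
    DEST.mkdir(parents=True, exist_ok=True)
    with tarfile.open(ARCHIVE) as tar:
        member = tar.getmember("linux-arm64/helm")  # upstream tarball layout
        member.name = "helm"                        # drop the leading directory
        tar.extract(member, path=DEST)
    (DEST / "helm").chmod(0o755)

if __name__ == "__main__":
    install_helm()
```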
Jul 2 08:58:37.568366 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:58:37.570590 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 08:58:37.595595 ntpd[1983]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:58:37.596897 ntpd[1983]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:58:37.609870 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 08:58:37.609870 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 08:58:37.609870 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: ---------------------------------------------------- Jul 2 08:58:37.609870 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:58:37.609870 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:58:37.609870 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: corporation. Support and training for ntp-4 are Jul 2 08:58:37.609870 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: available at https://www.nwtime.org/support Jul 2 08:58:37.609870 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: ---------------------------------------------------- Jul 2 08:58:37.609870 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: proto: precision = 0.096 usec (-23) Jul 2 08:58:37.610520 extend-filesystems[1981]: Found loop4 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found loop5 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found loop6 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found loop7 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found nvme0n1 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found nvme0n1p1 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found nvme0n1p2 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found nvme0n1p3 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found usr Jul 2 08:58:37.610520 extend-filesystems[1981]: Found nvme0n1p4 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found nvme0n1p6 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found nvme0n1p7 Jul 2 08:58:37.610520 extend-filesystems[1981]: Found nvme0n1p9 Jul 2 08:58:37.610520 extend-filesystems[1981]: Checking size of /dev/nvme0n1p9 Jul 2 08:58:37.596918 ntpd[1983]: ---------------------------------------------------- Jul 2 08:58:37.686733 jq[1992]: true Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: basedate set to 2024-06-19 Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: gps base set to 2024-06-23 (week 2320) Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: Listen normally on 3 eth0 172.31.26.125:123 Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: Listen normally on 4 lo [::1]:123 Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: bind(21) AF_INET6 fe80::4c1:89ff:fe73:2935%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: unable to create socket on eth0 (5) for fe80::4c1:89ff:fe73:2935%2#123 Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: failed to init interface for address fe80::4c1:89ff:fe73:2935%2 Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: Listening on routing socket on 
fd #21 for interface updates Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:58:37.687043 ntpd[1983]: 2 Jul 08:58:37 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:58:37.596938 ntpd[1983]: ntp-4 is maintained by Network Time Foundation, Jul 2 08:58:37.687801 tar[2005]: linux-arm64/helm Jul 2 08:58:37.596957 ntpd[1983]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 08:58:37.703986 update_engine[1991]: I0702 08:58:37.693673 1991 main.cc:92] Flatcar Update Engine starting Jul 2 08:58:37.596976 ntpd[1983]: corporation. Support and training for ntp-4 are Jul 2 08:58:37.716273 systemd[1]: Started update-engine.service - Update Engine. Jul 2 08:58:37.733270 update_engine[1991]: I0702 08:58:37.714716 1991 update_check_scheduler.cc:74] Next update check in 2m43s Jul 2 08:58:37.596994 ntpd[1983]: available at https://www.nwtime.org/support Jul 2 08:58:37.725212 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 08:58:37.597011 ntpd[1983]: ---------------------------------------------------- Jul 2 08:58:37.727984 (ntainerd)[2016]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 08:58:37.607635 ntpd[1983]: proto: precision = 0.096 usec (-23) Jul 2 08:58:37.615389 ntpd[1983]: basedate set to 2024-06-19 Jul 2 08:58:37.615422 ntpd[1983]: gps base set to 2024-06-23 (week 2320) Jul 2 08:58:37.630223 ntpd[1983]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 08:58:37.630310 ntpd[1983]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 08:58:37.644733 ntpd[1983]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 08:58:37.644805 ntpd[1983]: Listen normally on 3 eth0 172.31.26.125:123 Jul 2 08:58:37.740039 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jul 2 08:58:37.749749 extend-filesystems[1981]: Resized partition /dev/nvme0n1p9 Jul 2 08:58:37.644872 ntpd[1983]: Listen normally on 4 lo [::1]:123 Jul 2 08:58:37.753213 extend-filesystems[2031]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 08:58:37.762817 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 08:58:37.644950 ntpd[1983]: bind(21) AF_INET6 fe80::4c1:89ff:fe73:2935%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:58:37.644991 ntpd[1983]: unable to create socket on eth0 (5) for fe80::4c1:89ff:fe73:2935%2#123 Jul 2 08:58:37.645024 ntpd[1983]: failed to init interface for address fe80::4c1:89ff:fe73:2935%2 Jul 2 08:58:37.645078 ntpd[1983]: Listening on routing socket on fd #21 for interface updates Jul 2 08:58:37.662519 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:58:37.662576 ntpd[1983]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 08:58:37.772708 coreos-metadata[1978]: Jul 02 08:58:37.767 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:58:37.780830 coreos-metadata[1978]: Jul 02 08:58:37.777 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 08:58:37.780830 coreos-metadata[1978]: Jul 02 08:58:37.780 INFO Fetch successful Jul 2 08:58:37.780830 coreos-metadata[1978]: Jul 02 08:58:37.780 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 08:58:37.782723 jq[2022]: true Jul 2 08:58:37.783154 coreos-metadata[1978]: Jul 02 08:58:37.782 INFO Fetch successful Jul 2 08:58:37.783154 coreos-metadata[1978]: Jul 02 08:58:37.782 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 08:58:37.786400 coreos-metadata[1978]: Jul 02 08:58:37.785 INFO Fetch successful Jul 2 08:58:37.786400 coreos-metadata[1978]: Jul 02 08:58:37.785 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 08:58:37.786400 coreos-metadata[1978]: Jul 02 08:58:37.786 INFO Fetch successful Jul 2 08:58:37.786400 coreos-metadata[1978]: Jul 02 08:58:37.786 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 08:58:37.803713 coreos-metadata[1978]: Jul 02 08:58:37.793 INFO Fetch failed with 404: resource not found Jul 2 08:58:37.803713 coreos-metadata[1978]: Jul 02 08:58:37.793 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 08:58:37.803713 coreos-metadata[1978]: Jul 02 08:58:37.795 INFO Fetch successful Jul 2 08:58:37.803713 coreos-metadata[1978]: Jul 02 08:58:37.795 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 08:58:37.803713 coreos-metadata[1978]: Jul 02 08:58:37.800 INFO Fetch successful Jul 2 08:58:37.803713 coreos-metadata[1978]: Jul 02 08:58:37.800 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 08:58:37.807101 coreos-metadata[1978]: Jul 02 08:58:37.806 INFO Fetch successful Jul 2 08:58:37.807101 coreos-metadata[1978]: Jul 02 08:58:37.806 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 08:58:37.808546 coreos-metadata[1978]: Jul 02 08:58:37.807 INFO Fetch successful Jul 2 08:58:37.808546 coreos-metadata[1978]: Jul 02 08:58:37.807 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 08:58:37.810998 coreos-metadata[1978]: Jul 02 08:58:37.810 INFO Fetch successful Jul 2 08:58:37.856516 kernel: EXT4-fs 
(nvme0n1p9): resized filesystem to 1489915 Jul 2 08:58:37.909689 extend-filesystems[2031]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 08:58:37.909689 extend-filesystems[2031]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 08:58:37.909689 extend-filesystems[2031]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 08:58:37.934792 extend-filesystems[1981]: Resized filesystem in /dev/nvme0n1p9 Jul 2 08:58:37.912233 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:58:37.915672 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 08:58:37.986869 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 08:58:37.994045 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 08:58:37.996502 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 08:58:38.057512 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1779) Jul 2 08:58:38.088480 bash[2063]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:58:38.095082 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 08:58:38.099433 systemd-logind[1990]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 08:58:38.108752 systemd-logind[1990]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 08:58:38.109144 systemd-logind[1990]: New seat seat0. Jul 2 08:58:38.122367 dbus-daemon[1979]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 08:58:38.125645 dbus-daemon[1979]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=2003 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 08:58:38.181686 systemd[1]: Starting sshkeys.service... Jul 2 08:58:38.183183 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 08:58:38.185800 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 08:58:38.208968 systemd[1]: Starting polkit.service - Authorization Manager... Jul 2 08:58:38.251622 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 08:58:38.262557 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 08:58:38.326103 polkitd[2104]: Started polkitd version 121 Jul 2 08:58:38.350042 polkitd[2104]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 08:58:38.350165 polkitd[2104]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 08:58:38.360031 polkitd[2104]: Finished loading, compiling and executing 2 rules Jul 2 08:58:38.361053 dbus-daemon[1979]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 08:58:38.361363 systemd[1]: Started polkit.service - Authorization Manager. Jul 2 08:58:38.364572 polkitd[2104]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 08:58:38.464715 locksmithd[2028]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:58:38.494182 systemd-hostnamed[2003]: Hostname set to (transient) Jul 2 08:58:38.494344 systemd-resolved[1930]: System hostname changed to 'ip-172-31-26-125'. 
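The extend-filesystems output above shows resize2fs growing the mounted root filesystem on /dev/nvme0n1p9 online from 553472 to 1489915 4k blocks. The equivalent manual operation is a single resize2fs call with no size argument, which grows an ext4 filesystem to fill its partition even while mounted; a minimal wrapper, with the device name taken from the log:

```python
# Minimal sketch of the online grow performed by extend-filesystems: ext4 can
# be enlarged while mounted, and resize2fs without a size argument grows the
# filesystem to fill the underlying partition.
import subprocess

def grow_filesystem(device: str = "/dev/nvme0n1p9") -> None:
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    grow_filesystem()
```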
Jul 2 08:58:38.501472 containerd[2016]: time="2024-07-02T08:58:38.498597922Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 08:58:38.597604 ntpd[1983]: bind(24) AF_INET6 fe80::4c1:89ff:fe73:2935%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:58:38.600856 ntpd[1983]: 2 Jul 08:58:38 ntpd[1983]: bind(24) AF_INET6 fe80::4c1:89ff:fe73:2935%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 08:58:38.600856 ntpd[1983]: 2 Jul 08:58:38 ntpd[1983]: unable to create socket on eth0 (6) for fe80::4c1:89ff:fe73:2935%2#123 Jul 2 08:58:38.600856 ntpd[1983]: 2 Jul 08:58:38 ntpd[1983]: failed to init interface for address fe80::4c1:89ff:fe73:2935%2 Jul 2 08:58:38.600625 ntpd[1983]: unable to create socket on eth0 (6) for fe80::4c1:89ff:fe73:2935%2#123 Jul 2 08:58:38.600655 ntpd[1983]: failed to init interface for address fe80::4c1:89ff:fe73:2935%2 Jul 2 08:58:38.632133 coreos-metadata[2117]: Jul 02 08:58:38.630 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 08:58:38.639492 coreos-metadata[2117]: Jul 02 08:58:38.637 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 08:58:38.645482 coreos-metadata[2117]: Jul 02 08:58:38.643 INFO Fetch successful Jul 2 08:58:38.645482 coreos-metadata[2117]: Jul 02 08:58:38.643 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 08:58:38.653473 coreos-metadata[2117]: Jul 02 08:58:38.651 INFO Fetch successful Jul 2 08:58:38.657505 unknown[2117]: wrote ssh authorized keys file for user: core Jul 2 08:58:38.686849 containerd[2016]: time="2024-07-02T08:58:38.686781107Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 08:58:38.687024 containerd[2016]: time="2024-07-02T08:58:38.686996351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:38.695641 containerd[2016]: time="2024-07-02T08:58:38.694667171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:38.695641 containerd[2016]: time="2024-07-02T08:58:38.694736651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:38.695641 containerd[2016]: time="2024-07-02T08:58:38.695086247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:38.695641 containerd[2016]: time="2024-07-02T08:58:38.695119751Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:58:38.695641 containerd[2016]: time="2024-07-02T08:58:38.695281787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:38.695641 containerd[2016]: time="2024-07-02T08:58:38.695391983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:38.695641 containerd[2016]: time="2024-07-02T08:58:38.695419499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:38.697284 containerd[2016]: time="2024-07-02T08:58:38.697236287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:38.699691 containerd[2016]: time="2024-07-02T08:58:38.699639011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:38.699853 containerd[2016]: time="2024-07-02T08:58:38.699821327Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:58:38.699955 containerd[2016]: time="2024-07-02T08:58:38.699928211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:58:38.700303 containerd[2016]: time="2024-07-02T08:58:38.700264823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:58:38.701042 containerd[2016]: time="2024-07-02T08:58:38.701006963Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 08:58:38.701275 containerd[2016]: time="2024-07-02T08:58:38.701243459Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:58:38.701584 containerd[2016]: time="2024-07-02T08:58:38.701552207Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:58:38.720577 containerd[2016]: time="2024-07-02T08:58:38.720524891Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:58:38.720793 containerd[2016]: time="2024-07-02T08:58:38.720764003Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 08:58:38.721097 containerd[2016]: time="2024-07-02T08:58:38.721067183Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:58:38.721813 containerd[2016]: time="2024-07-02T08:58:38.721347719Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 08:58:38.721813 containerd[2016]: time="2024-07-02T08:58:38.721746623Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 08:58:38.722905 containerd[2016]: time="2024-07-02T08:58:38.721778507Z" level=info msg="NRI interface is disabled by configuration." Jul 2 08:58:38.722905 containerd[2016]: time="2024-07-02T08:58:38.722061827Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 08:58:38.723074 containerd[2016]: time="2024-07-02T08:58:38.723039683Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 08:58:38.723298 containerd[2016]: time="2024-07-02T08:58:38.723190127Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jul 2 08:58:38.723545 containerd[2016]: time="2024-07-02T08:58:38.723227039Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 08:58:38.724152 containerd[2016]: time="2024-07-02T08:58:38.723656687Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 08:58:38.724731 containerd[2016]: time="2024-07-02T08:58:38.724274531Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 08:58:38.724731 containerd[2016]: time="2024-07-02T08:58:38.724324655Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:58:38.724731 containerd[2016]: time="2024-07-02T08:58:38.724383959Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:58:38.724731 containerd[2016]: time="2024-07-02T08:58:38.724420835Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:58:38.724731 containerd[2016]: time="2024-07-02T08:58:38.724506635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:58:38.724731 containerd[2016]: time="2024-07-02T08:58:38.724555907Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:58:38.724731 containerd[2016]: time="2024-07-02T08:58:38.724588511Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:58:38.724731 containerd[2016]: time="2024-07-02T08:58:38.724621355Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:58:38.726595 containerd[2016]: time="2024-07-02T08:58:38.725522999Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:58:38.727763 containerd[2016]: time="2024-07-02T08:58:38.727421075Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 08:58:38.727763 containerd[2016]: time="2024-07-02T08:58:38.727722035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.728319 containerd[2016]: time="2024-07-02T08:58:38.727854107Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 08:58:38.728319 containerd[2016]: time="2024-07-02T08:58:38.728124443Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:58:38.728980 update-ssh-keys[2179]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:58:38.729372 containerd[2016]: time="2024-07-02T08:58:38.728859383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.729372 containerd[2016]: time="2024-07-02T08:58:38.728928383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.729670511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.730532939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.730584899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.730622147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.730675823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.730709963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.730743587Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.731044583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.731083691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.731114459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.731145959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.731179811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.731265035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.732279 containerd[2016]: time="2024-07-02T08:58:38.731401775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:58:38.731094 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 08:58:38.737659 containerd[2016]: time="2024-07-02T08:58:38.731433143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 08:58:38.737788 containerd[2016]: time="2024-07-02T08:58:38.737413523Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:58:38.740044 containerd[2016]: time="2024-07-02T08:58:38.738215087Z" level=info msg="Connect containerd service" Jul 2 08:58:38.740044 containerd[2016]: time="2024-07-02T08:58:38.738696707Z" level=info msg="using legacy CRI server" Jul 2 08:58:38.740044 containerd[2016]: time="2024-07-02T08:58:38.738725831Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 08:58:38.738697 systemd[1]: Finished sshkeys.service. 
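The coreos-metadata entries above follow the IMDSv2 pattern: a PUT to /latest/api/token for a session token, then GETs under /2021-01-03/meta-data/ with that token, ending with the OpenSSH key that update-ssh-keys writes into /home/core/.ssh/authorized_keys. A minimal stand-alone sketch of the same fetch is shown below; it is illustrative rather than the agent's code, and it only works from inside an EC2 instance.

    # Sketch: fetch the first SSH public key from the EC2 instance metadata service,
    # using the same token-then-GET endpoints the coreos-metadata entries show.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl: int = 21600) -> str:
        # IMDSv2: request a short-lived session token with a PUT.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        # Subsequent reads present the token in a header.
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = imds_token()
        print(imds_get("public-keys/0/openssh-key", token))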
Jul 2 08:58:38.745563 containerd[2016]: time="2024-07-02T08:58:38.741425231Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:58:38.749269 containerd[2016]: time="2024-07-02T08:58:38.749194511Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:58:38.749390 containerd[2016]: time="2024-07-02T08:58:38.749306399Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:58:38.749390 containerd[2016]: time="2024-07-02T08:58:38.749358887Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 08:58:38.749517 containerd[2016]: time="2024-07-02T08:58:38.749394887Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:58:38.752819 containerd[2016]: time="2024-07-02T08:58:38.749436791Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 08:58:38.753137 containerd[2016]: time="2024-07-02T08:58:38.753070415Z" level=info msg="Start subscribing containerd event" Jul 2 08:58:38.753227 containerd[2016]: time="2024-07-02T08:58:38.753163535Z" level=info msg="Start recovering state" Jul 2 08:58:38.756279 containerd[2016]: time="2024-07-02T08:58:38.755972819Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:58:38.756279 containerd[2016]: time="2024-07-02T08:58:38.756101447Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 08:58:38.758133 containerd[2016]: time="2024-07-02T08:58:38.756580871Z" level=info msg="Start event monitor" Jul 2 08:58:38.758133 containerd[2016]: time="2024-07-02T08:58:38.756630419Z" level=info msg="Start snapshots syncer" Jul 2 08:58:38.758133 containerd[2016]: time="2024-07-02T08:58:38.756655151Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:58:38.758133 containerd[2016]: time="2024-07-02T08:58:38.756677111Z" level=info msg="Start streaming server" Jul 2 08:58:38.758133 containerd[2016]: time="2024-07-02T08:58:38.756816803Z" level=info msg="containerd successfully booted in 0.264187s" Jul 2 08:58:38.756958 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 08:58:39.086646 systemd-networkd[1929]: eth0: Gained IPv6LL Jul 2 08:58:39.093599 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 08:58:39.097066 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 08:58:39.112035 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 2 08:58:39.126865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:39.135972 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 08:58:39.179628 tar[2005]: linux-arm64/LICENSE Jul 2 08:58:39.179628 tar[2005]: linux-arm64/README.md Jul 2 08:58:39.186185 sshd_keygen[2015]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:58:39.224589 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
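containerd reports above that it is serving on /run/containerd/containerd.sock. One quick way to confirm the socket is answering is the ctr CLI that ships with containerd; the sketch below assumes ctr is installed and the script runs as root.

    # Sketch: confirm the containerd socket reported in the log is live, using `ctr`.
    import os
    import subprocess

    SOCKET = "/run/containerd/containerd.sock"  # address from the log

    if not os.path.exists(SOCKET):
        raise SystemExit(f"{SOCKET} not present - is containerd running?")

    # `ctr version` prints client and server versions if the daemon responds.
    subprocess.run(["ctr", "--address", SOCKET, "version"], check=True)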
Jul 2 08:58:39.255496 amazon-ssm-agent[2185]: Initializing new seelog logger Jul 2 08:58:39.255496 amazon-ssm-agent[2185]: New Seelog Logger Creation Complete Jul 2 08:58:39.255496 amazon-ssm-agent[2185]: 2024/07/02 08:58:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:39.255496 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:39.255496 amazon-ssm-agent[2185]: 2024/07/02 08:58:39 processing appconfig overrides Jul 2 08:58:39.255496 amazon-ssm-agent[2185]: 2024/07/02 08:58:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:39.257671 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO Proxy environment variables: Jul 2 08:58:39.259192 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:39.259192 amazon-ssm-agent[2185]: 2024/07/02 08:58:39 processing appconfig overrides Jul 2 08:58:39.259192 amazon-ssm-agent[2185]: 2024/07/02 08:58:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:39.259192 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:39.259192 amazon-ssm-agent[2185]: 2024/07/02 08:58:39 processing appconfig overrides Jul 2 08:58:39.267178 amazon-ssm-agent[2185]: 2024/07/02 08:58:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:39.267178 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 08:58:39.267178 amazon-ssm-agent[2185]: 2024/07/02 08:58:39 processing appconfig overrides Jul 2 08:58:39.272644 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 08:58:39.275872 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 08:58:39.298748 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 08:58:39.307956 systemd[1]: Started sshd@0-172.31.26.125:22-147.75.109.163:60576.service - OpenSSH per-connection server daemon (147.75.109.163:60576). Jul 2 08:58:39.333882 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:58:39.334589 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 08:58:39.348914 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 08:58:39.359295 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO https_proxy: Jul 2 08:58:39.388327 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 08:58:39.398040 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 08:58:39.403310 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 08:58:39.407983 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 08:58:39.460252 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO http_proxy: Jul 2 08:58:39.555211 sshd[2213]: Accepted publickey for core from 147.75.109.163 port 60576 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:39.558273 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO no_proxy: Jul 2 08:58:39.561183 sshd[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:39.589891 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 08:58:39.600946 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 08:58:39.612816 systemd-logind[1990]: New session 1 of user core. Jul 2 08:58:39.642387 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
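The sshd-keygen unit's "generating new host keys: RSA ECDSA ED25519" above corresponds to creating the host keys that are missing on first boot. Roughly the same effect can be had by hand with ssh-keygen -A, which generates only those default key types whose files do not yet exist; a sketch, assuming OpenSSH is installed and the caller is root.

    # Sketch: regenerate any missing SSH host keys, roughly what the sshd-keygen
    # unit did on first boot. Existing host keys are left untouched.
    import subprocess

    subprocess.run(["ssh-keygen", "-A"], check=True)  # needs root to write under /etc/ssh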
Jul 2 08:58:39.656578 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO Checking if agent identity type OnPrem can be assumed Jul 2 08:58:39.658037 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 08:58:39.677928 (systemd)[2225]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:39.758347 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO Checking if agent identity type EC2 can be assumed Jul 2 08:58:39.855907 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO Agent will take identity from EC2 Jul 2 08:58:39.955239 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:58:40.002550 systemd[2225]: Queued start job for default target default.target. Jul 2 08:58:40.010476 systemd[2225]: Created slice app.slice - User Application Slice. Jul 2 08:58:40.010534 systemd[2225]: Reached target paths.target - Paths. Jul 2 08:58:40.010567 systemd[2225]: Reached target timers.target - Timers. Jul 2 08:58:40.015137 systemd[2225]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 08:58:40.055573 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:58:40.070062 systemd[2225]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 08:58:40.070322 systemd[2225]: Reached target sockets.target - Sockets. Jul 2 08:58:40.070355 systemd[2225]: Reached target basic.target - Basic System. Jul 2 08:58:40.070442 systemd[2225]: Reached target default.target - Main User Target. Jul 2 08:58:40.071096 systemd[2225]: Startup finished in 380ms. Jul 2 08:58:40.071261 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 08:58:40.084085 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 08:58:40.157252 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 08:58:40.256737 systemd[1]: Started sshd@1-172.31.26.125:22-147.75.109.163:60586.service - OpenSSH per-connection server daemon (147.75.109.163:60586). Jul 2 08:58:40.263188 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 2 08:58:40.360571 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 2 08:58:40.461486 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO [amazon-ssm-agent] Starting Core Agent Jul 2 08:58:40.467434 sshd[2237]: Accepted publickey for core from 147.75.109.163 port 60586 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:40.471592 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:40.489589 systemd-logind[1990]: New session 2 of user core. Jul 2 08:58:40.494812 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 08:58:40.561595 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 2 08:58:40.626818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:40.630067 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 08:58:40.632813 sshd[2237]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:40.635363 systemd[1]: Startup finished in 1.141s (kernel) + 8.495s (initrd) + 8.390s (userspace) = 18.027s. Jul 2 08:58:40.649850 systemd[1]: sshd@1-172.31.26.125:22-147.75.109.163:60586.service: Deactivated successfully. 
Jul 2 08:58:40.657015 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 08:58:40.662037 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:58:40.663550 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO [Registrar] Starting registrar module Jul 2 08:58:40.663783 systemd-logind[1990]: Session 2 logged out. Waiting for processes to exit. Jul 2 08:58:40.681650 systemd[1]: Started sshd@2-172.31.26.125:22-147.75.109.163:60600.service - OpenSSH per-connection server daemon (147.75.109.163:60600). Jul 2 08:58:40.684230 systemd-logind[1990]: Removed session 2. Jul 2 08:58:40.763975 amazon-ssm-agent[2185]: 2024-07-02 08:58:39 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 2 08:58:40.865677 amazon-ssm-agent[2185]: 2024-07-02 08:58:40 INFO [EC2Identity] EC2 registration was successful. Jul 2 08:58:40.892485 amazon-ssm-agent[2185]: 2024-07-02 08:58:40 INFO [CredentialRefresher] credentialRefresher has started Jul 2 08:58:40.892485 amazon-ssm-agent[2185]: 2024-07-02 08:58:40 INFO [CredentialRefresher] Starting credentials refresher loop Jul 2 08:58:40.892485 amazon-ssm-agent[2185]: 2024-07-02 08:58:40 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 2 08:58:40.893516 sshd[2252]: Accepted publickey for core from 147.75.109.163 port 60600 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:40.896423 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:40.907112 systemd-logind[1990]: New session 3 of user core. Jul 2 08:58:40.911749 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 08:58:40.966290 amazon-ssm-agent[2185]: 2024-07-02 08:58:40 INFO [CredentialRefresher] Next credential rotation will be in 30.291653891766668 minutes Jul 2 08:58:41.041411 sshd[2252]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:41.047857 systemd-logind[1990]: Session 3 logged out. Waiting for processes to exit. Jul 2 08:58:41.048567 systemd[1]: sshd@2-172.31.26.125:22-147.75.109.163:60600.service: Deactivated successfully. Jul 2 08:58:41.054102 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 08:58:41.061165 systemd-logind[1990]: Removed session 3. Jul 2 08:58:41.434577 kubelet[2246]: E0702 08:58:41.434443 2246 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:58:41.438204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:58:41.438548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:58:41.439869 systemd[1]: kubelet.service: Consumed 1.301s CPU time. 
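The kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written by kubeadm init or kubeadm join, so these failures are expected until the node joins a cluster. Purely as an illustration of the file the error refers to, here is a hand-written stand-in with assumed values, not what kubeadm would generate.

    # Illustration only: write a minimal KubeletConfiguration at the path the kubelet
    # is complaining about. The field values below are placeholder assumptions.
    from pathlib import Path

    MINIMAL_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """

    def write_stub_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
        target = Path(path)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(MINIMAL_CONFIG)  # requires root

    if __name__ == "__main__":
        write_stub_config()

Writing such a stub only silences the "no such file or directory" error; actually joining the node still requires the kubeadm-generated configuration and credentials.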
Jul 2 08:58:41.597595 ntpd[1983]: Listen normally on 7 eth0 [fe80::4c1:89ff:fe73:2935%2]:123 Jul 2 08:58:41.598592 ntpd[1983]: 2 Jul 08:58:41 ntpd[1983]: Listen normally on 7 eth0 [fe80::4c1:89ff:fe73:2935%2]:123 Jul 2 08:58:41.918829 amazon-ssm-agent[2185]: 2024-07-02 08:58:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 2 08:58:42.019412 amazon-ssm-agent[2185]: 2024-07-02 08:58:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2268) started Jul 2 08:58:42.119868 amazon-ssm-agent[2185]: 2024-07-02 08:58:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 2 08:58:45.050906 systemd-resolved[1930]: Clock change detected. Flushing caches. Jul 2 08:58:51.538156 systemd[1]: Started sshd@3-172.31.26.125:22-147.75.109.163:37334.service - OpenSSH per-connection server daemon (147.75.109.163:37334). Jul 2 08:58:51.706571 sshd[2279]: Accepted publickey for core from 147.75.109.163 port 37334 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:51.709048 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:51.717052 systemd-logind[1990]: New session 4 of user core. Jul 2 08:58:51.725028 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 08:58:51.851981 sshd[2279]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:51.857512 systemd[1]: sshd@3-172.31.26.125:22-147.75.109.163:37334.service: Deactivated successfully. Jul 2 08:58:51.860381 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 08:58:51.863544 systemd-logind[1990]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:58:51.865525 systemd-logind[1990]: Removed session 4. Jul 2 08:58:51.895198 systemd[1]: Started sshd@4-172.31.26.125:22-147.75.109.163:37338.service - OpenSSH per-connection server daemon (147.75.109.163:37338). Jul 2 08:58:51.896604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:58:51.902121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:58:52.065957 sshd[2286]: Accepted publickey for core from 147.75.109.163 port 37338 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:52.068524 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:52.077025 systemd-logind[1990]: New session 5 of user core. Jul 2 08:58:52.084017 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 08:58:52.206742 sshd[2286]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:52.214077 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:58:52.214926 systemd-logind[1990]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:58:52.220374 systemd[1]: sshd@4-172.31.26.125:22-147.75.109.163:37338.service: Deactivated successfully. Jul 2 08:58:52.225944 systemd-logind[1990]: Removed session 5. Jul 2 08:58:52.244473 systemd[1]: Started sshd@5-172.31.26.125:22-147.75.109.163:37352.service - OpenSSH per-connection server daemon (147.75.109.163:37352). 
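With eth0's IPv6 link-local address now usable, ntpd finally binds the socket it had been failing to create since boot. As a side illustration of the protocol spoken on port 123, a bare-bones SNTP client query is sketched below; the server name is an assumption, and any reachable NTP server would do.

    # Sketch: minimal SNTP query (illustrative; not how ntpd itself synchronizes).
    import socket
    import struct
    import time

    NTP_SERVER = "pool.ntp.org"    # assumed example server
    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server: str = NTP_SERVER) -> float:
        # First byte 0x1b = LI 0, version 3, mode 3 (client); rest of packet is zero.
        packet = b"\x1b" + 47 * b"\0"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(5)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(512)
        transmit_seconds = struct.unpack("!I", data[40:44])[0]
        return transmit_seconds - NTP_EPOCH_OFFSET

    if __name__ == "__main__":
        print(time.ctime(sntp_time()))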
Jul 2 08:58:52.429035 sshd[2296]: Accepted publickey for core from 147.75.109.163 port 37352 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:52.430100 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:52.446027 systemd-logind[1990]: New session 6 of user core. Jul 2 08:58:52.450153 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 08:58:52.453523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:58:52.466601 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:58:52.538241 kubelet[2302]: E0702 08:58:52.538151 2302 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:58:52.545619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:58:52.546005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:58:52.583796 sshd[2296]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:52.589796 systemd[1]: sshd@5-172.31.26.125:22-147.75.109.163:37352.service: Deactivated successfully. Jul 2 08:58:52.594461 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:58:52.597248 systemd-logind[1990]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:58:52.599046 systemd-logind[1990]: Removed session 6. Jul 2 08:58:52.625187 systemd[1]: Started sshd@6-172.31.26.125:22-147.75.109.163:40778.service - OpenSSH per-connection server daemon (147.75.109.163:40778). Jul 2 08:58:52.789930 sshd[2316]: Accepted publickey for core from 147.75.109.163 port 40778 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:52.792551 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:52.802072 systemd-logind[1990]: New session 7 of user core. Jul 2 08:58:52.808992 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 08:58:52.949372 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 08:58:52.949907 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:52.966187 sudo[2319]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:52.989692 sshd[2316]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:52.995146 systemd[1]: sshd@6-172.31.26.125:22-147.75.109.163:40778.service: Deactivated successfully. Jul 2 08:58:52.999031 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:58:53.003027 systemd-logind[1990]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:58:53.004758 systemd-logind[1990]: Removed session 7. Jul 2 08:58:53.032257 systemd[1]: Started sshd@7-172.31.26.125:22-147.75.109.163:40794.service - OpenSSH per-connection server daemon (147.75.109.163:40794). Jul 2 08:58:53.202471 sshd[2324]: Accepted publickey for core from 147.75.109.163 port 40794 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:53.205102 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:53.212793 systemd-logind[1990]: New session 8 of user core. 
Jul 2 08:58:53.221963 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 08:58:53.327564 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 08:58:53.328661 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:53.334810 sudo[2328]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:53.344866 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 08:58:53.345379 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:53.367370 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 08:58:53.382518 auditctl[2331]: No rules Jul 2 08:58:53.383294 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 08:58:53.384799 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 08:58:53.397365 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:58:53.436339 augenrules[2349]: No rules Jul 2 08:58:53.438536 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:58:53.440744 sudo[2327]: pam_unix(sudo:session): session closed for user root Jul 2 08:58:53.465054 sshd[2324]: pam_unix(sshd:session): session closed for user core Jul 2 08:58:53.470517 systemd-logind[1990]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:58:53.471192 systemd[1]: sshd@7-172.31.26.125:22-147.75.109.163:40794.service: Deactivated successfully. Jul 2 08:58:53.474122 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:58:53.477759 systemd-logind[1990]: Removed session 8. Jul 2 08:58:53.504245 systemd[1]: Started sshd@8-172.31.26.125:22-147.75.109.163:40810.service - OpenSSH per-connection server daemon (147.75.109.163:40810). Jul 2 08:58:53.680039 sshd[2357]: Accepted publickey for core from 147.75.109.163 port 40810 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 08:58:53.682506 sshd[2357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:58:53.690042 systemd-logind[1990]: New session 9 of user core. Jul 2 08:58:53.699991 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 08:58:53.803743 sudo[2360]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:58:53.804327 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:58:54.021203 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 08:58:54.030213 (dockerd)[2370]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 08:58:54.445193 dockerd[2370]: time="2024-07-02T08:58:54.444757330Z" level=info msg="Starting up" Jul 2 08:58:54.524300 systemd[1]: var-lib-docker-metacopy\x2dcheck3798113916-merged.mount: Deactivated successfully. Jul 2 08:58:54.554543 dockerd[2370]: time="2024-07-02T08:58:54.554484311Z" level=info msg="Loading containers: start." Jul 2 08:58:54.730020 kernel: Initializing XFRM netlink socket Jul 2 08:58:54.787832 (udev-worker)[2382]: Network interface NamePolicy= disabled on kernel command line. Jul 2 08:58:54.869219 systemd-networkd[1929]: docker0: Link UP Jul 2 08:58:54.887412 dockerd[2370]: time="2024-07-02T08:58:54.887347308Z" level=info msg="Loading containers: done." 
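dockerd has loaded its containers above and is about to finish initialization. Once the daemon is up it can be probed through the docker CLI; the sketch below assumes the CLI is installed and the caller can reach /run/docker.sock (root or the docker group), and it prints the storage driver the daemon logs mention (overlay2).

    # Sketch: confirm the Docker daemon is answering and report its storage driver.
    import subprocess

    subprocess.run(["docker", "version"], check=True)
    driver = subprocess.run(
        ["docker", "info", "--format", "{{.Driver}}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"storage driver: {driver}")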
Jul 2 08:58:54.993942 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1069655786-merged.mount: Deactivated successfully. Jul 2 08:58:55.001218 dockerd[2370]: time="2024-07-02T08:58:55.001145337Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 08:58:55.001507 dockerd[2370]: time="2024-07-02T08:58:55.001457445Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 08:58:55.001717 dockerd[2370]: time="2024-07-02T08:58:55.001668573Z" level=info msg="Daemon has completed initialization" Jul 2 08:58:55.064296 dockerd[2370]: time="2024-07-02T08:58:55.064071297Z" level=info msg="API listen on /run/docker.sock" Jul 2 08:58:55.066502 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 08:58:56.069836 containerd[2016]: time="2024-07-02T08:58:56.069684694Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 08:58:56.711244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3018744196.mount: Deactivated successfully. Jul 2 08:58:58.387865 containerd[2016]: time="2024-07-02T08:58:58.387765446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:58.389972 containerd[2016]: time="2024-07-02T08:58:58.389899322Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940430" Jul 2 08:58:58.391592 containerd[2016]: time="2024-07-02T08:58:58.391518734Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:58.398680 containerd[2016]: time="2024-07-02T08:58:58.398599202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:58:58.401085 containerd[2016]: time="2024-07-02T08:58:58.400846670Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 2.331083364s" Jul 2 08:58:58.401085 containerd[2016]: time="2024-07-02T08:58:58.400909778Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\"" Jul 2 08:58:58.443437 containerd[2016]: time="2024-07-02T08:58:58.443384258Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 08:59:00.090382 containerd[2016]: time="2024-07-02T08:59:00.090303170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:00.092447 containerd[2016]: time="2024-07-02T08:59:00.092377574Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881371" Jul 2 08:59:00.094165 containerd[2016]: time="2024-07-02T08:59:00.094080278Z" level=info msg="ImageCreate event 
name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:00.100237 containerd[2016]: time="2024-07-02T08:59:00.100143182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:00.102536 containerd[2016]: time="2024-07-02T08:59:00.102369518Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 1.658716316s" Jul 2 08:59:00.102536 containerd[2016]: time="2024-07-02T08:59:00.102422990Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\"" Jul 2 08:59:00.142814 containerd[2016]: time="2024-07-02T08:59:00.142756586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 08:59:01.393371 containerd[2016]: time="2024-07-02T08:59:01.392693921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:01.394835 containerd[2016]: time="2024-07-02T08:59:01.394780049Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155688" Jul 2 08:59:01.396107 containerd[2016]: time="2024-07-02T08:59:01.395989349Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:01.401691 containerd[2016]: time="2024-07-02T08:59:01.401606117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:01.404337 containerd[2016]: time="2024-07-02T08:59:01.404117177Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 1.261104271s" Jul 2 08:59:01.404337 containerd[2016]: time="2024-07-02T08:59:01.404181389Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\"" Jul 2 08:59:01.450279 containerd[2016]: time="2024-07-02T08:59:01.450176609Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 08:59:02.688879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 08:59:02.698363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:59:03.044176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956923267.mount: Deactivated successfully. Jul 2 08:59:03.308046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 08:59:03.321252 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:59:03.430200 kubelet[2596]: E0702 08:59:03.430141 2596 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:59:03.435287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:59:03.435635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:59:03.742929 containerd[2016]: time="2024-07-02T08:59:03.742848284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:03.744368 containerd[2016]: time="2024-07-02T08:59:03.744300092Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634092" Jul 2 08:59:03.746037 containerd[2016]: time="2024-07-02T08:59:03.745965332Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:03.749869 containerd[2016]: time="2024-07-02T08:59:03.749818328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:03.751485 containerd[2016]: time="2024-07-02T08:59:03.751183748Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 2.300918243s" Jul 2 08:59:03.751485 containerd[2016]: time="2024-07-02T08:59:03.751236728Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\"" Jul 2 08:59:03.791291 containerd[2016]: time="2024-07-02T08:59:03.791246109Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 08:59:04.390774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859426196.mount: Deactivated successfully. 
Jul 2 08:59:05.647956 containerd[2016]: time="2024-07-02T08:59:05.647893018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:05.669059 containerd[2016]: time="2024-07-02T08:59:05.668988598Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jul 2 08:59:05.687155 containerd[2016]: time="2024-07-02T08:59:05.687061090Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:05.706001 containerd[2016]: time="2024-07-02T08:59:05.705908074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:05.707944 containerd[2016]: time="2024-07-02T08:59:05.707686054Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.916208933s" Jul 2 08:59:05.707944 containerd[2016]: time="2024-07-02T08:59:05.707793106Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 08:59:05.751931 containerd[2016]: time="2024-07-02T08:59:05.751798174Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 08:59:06.308153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775748450.mount: Deactivated successfully. 
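The pull messages report both the bytes read and the elapsed time, so effective throughput can be read straight off the log; for the coredns pull above:

    # Back-of-the-envelope throughput for the coredns pull, using the log's own
    # numbers ("bytes read=16485381", "in 1.916208933s").
    bytes_read = 16_485_381
    elapsed_s = 1.916208933
    print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # ~8.6 MB/s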
Jul 2 08:59:06.321783 containerd[2016]: time="2024-07-02T08:59:06.321199425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:06.324266 containerd[2016]: time="2024-07-02T08:59:06.324196881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jul 2 08:59:06.326581 containerd[2016]: time="2024-07-02T08:59:06.326511333Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:06.331629 containerd[2016]: time="2024-07-02T08:59:06.331563945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:06.334017 containerd[2016]: time="2024-07-02T08:59:06.333430389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 581.575551ms" Jul 2 08:59:06.334017 containerd[2016]: time="2024-07-02T08:59:06.333483297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 08:59:06.373627 containerd[2016]: time="2024-07-02T08:59:06.373555593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 08:59:07.033497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716292551.mount: Deactivated successfully. Jul 2 08:59:08.972080 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 2 08:59:09.710727 containerd[2016]: time="2024-07-02T08:59:09.709873610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:09.722735 containerd[2016]: time="2024-07-02T08:59:09.722657510Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jul 2 08:59:09.743798 containerd[2016]: time="2024-07-02T08:59:09.743655014Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:09.768861 containerd[2016]: time="2024-07-02T08:59:09.768747014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:09.771549 containerd[2016]: time="2024-07-02T08:59:09.771323438Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.397700237s" Jul 2 08:59:09.771549 containerd[2016]: time="2024-07-02T08:59:09.771419510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jul 2 08:59:13.438699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 08:59:13.449210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:59:14.026141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:59:14.036562 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:59:14.119747 kubelet[2783]: E0702 08:59:14.119107 2783 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:59:14.125120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:59:14.125614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:59:18.247221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:59:18.255208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:59:18.308619 systemd[1]: Reloading requested from client PID 2797 ('systemctl') (unit session-9.scope)... Jul 2 08:59:18.308645 systemd[1]: Reloading... Jul 2 08:59:18.482774 zram_generator::config[2836]: No configuration found. Jul 2 08:59:18.722625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:59:18.892674 systemd[1]: Reloading finished in 583 ms. Jul 2 08:59:18.978200 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 08:59:18.978422 systemd[1]: kubelet.service: Failed with result 'signal'. 
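kubelet is now on its third scheduled restart, and the systemctl invocation from session-9 triggers a systemd reload before the next attempt. The counter behind the "restart counter is at N" messages is exposed as a unit property and can be inspected directly; a sketch, assuming the systemd CLI tools are available.

    # Sketch: read the restart counter that produces the "restart counter is at N"
    # messages above. NRestarts is a standard systemd service property.
    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "kubelet.service", "--property=NRestarts"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # e.g. NRestarts=3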
Jul 2 08:59:18.979340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:59:18.990357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:59:19.541533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:59:19.558250 (kubelet)[2899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:59:19.633405 kubelet[2899]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:59:19.633405 kubelet[2899]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:59:19.633405 kubelet[2899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:59:19.633990 kubelet[2899]: I0702 08:59:19.633509 2899 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:59:20.375825 kubelet[2899]: I0702 08:59:20.375548 2899 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 08:59:20.375825 kubelet[2899]: I0702 08:59:20.375592 2899 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:59:20.377798 kubelet[2899]: I0702 08:59:20.376366 2899 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 08:59:20.406075 kubelet[2899]: E0702 08:59:20.406021 2899 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:20.407429 kubelet[2899]: I0702 08:59:20.407388 2899 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:59:20.429058 kubelet[2899]: I0702 08:59:20.429020 2899 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:59:20.429740 kubelet[2899]: I0702 08:59:20.429686 2899 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:59:20.430107 kubelet[2899]: I0702 08:59:20.429838 2899 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:59:20.430319 kubelet[2899]: I0702 08:59:20.430299 2899 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:59:20.430413 kubelet[2899]: I0702 08:59:20.430396 2899 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:59:20.430771 kubelet[2899]: I0702 08:59:20.430751 2899 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:59:20.432398 kubelet[2899]: I0702 08:59:20.432354 2899 kubelet.go:400] "Attempting to sync node with API server" Jul 2 08:59:20.432558 kubelet[2899]: I0702 08:59:20.432534 2899 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:59:20.432744 kubelet[2899]: I0702 08:59:20.432700 2899 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:59:20.432871 kubelet[2899]: I0702 08:59:20.432851 2899 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:59:20.434184 kubelet[2899]: W0702 08:59:20.434119 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:20.434408 kubelet[2899]: E0702 08:59:20.434385 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:20.434652 kubelet[2899]: W0702 08:59:20.434598 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.26.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-125&limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:20.434849 kubelet[2899]: E0702 08:59:20.434827 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-125&limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:20.435125 kubelet[2899]: I0702 08:59:20.435092 2899 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:59:20.435625 kubelet[2899]: I0702 08:59:20.435588 2899 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:59:20.435857 kubelet[2899]: W0702 08:59:20.435836 2899 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 08:59:20.437442 kubelet[2899]: I0702 08:59:20.437402 2899 server.go:1264] "Started kubelet" Jul 2 08:59:20.445746 kubelet[2899]: I0702 08:59:20.445515 2899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:59:20.450762 kubelet[2899]: E0702 08:59:20.450518 2899 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.125:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.125:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-125.17de59b9613dc623 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-125,UID:ip-172-31-26-125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-125,},FirstTimestamp:2024-07-02 08:59:20.437368355 +0000 UTC m=+0.872159657,LastTimestamp:2024-07-02 08:59:20.437368355 +0000 UTC m=+0.872159657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-125,}" Jul 2 08:59:20.456775 kubelet[2899]: I0702 08:59:20.456543 2899 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:59:20.457907 kubelet[2899]: I0702 08:59:20.457045 2899 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:59:20.458437 kubelet[2899]: I0702 08:59:20.458388 2899 server.go:455] "Adding debug handlers to kubelet server" Jul 2 08:59:20.460246 kubelet[2899]: I0702 08:59:20.460148 2899 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:59:20.460550 kubelet[2899]: I0702 08:59:20.460509 2899 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:59:20.461040 kubelet[2899]: E0702 08:59:20.460982 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-125?timeout=10s\": dial tcp 172.31.26.125:6443: connect: connection refused" interval="200ms" Jul 2 08:59:20.461552 kubelet[2899]: I0702 08:59:20.461517 2899 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 08:59:20.463283 kubelet[2899]: W0702 08:59:20.463184 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://172.31.26.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:20.463283 kubelet[2899]: E0702 08:59:20.463288 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:20.464223 kubelet[2899]: I0702 08:59:20.464161 2899 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:59:20.466821 kubelet[2899]: I0702 08:59:20.466291 2899 reconciler.go:26] "Reconciler: start to sync state" Jul 2 08:59:20.467059 kubelet[2899]: E0702 08:59:20.467025 2899 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:59:20.468235 kubelet[2899]: I0702 08:59:20.468200 2899 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:59:20.468394 kubelet[2899]: I0702 08:59:20.468375 2899 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:59:20.492434 kubelet[2899]: I0702 08:59:20.492361 2899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:59:20.496476 kubelet[2899]: I0702 08:59:20.496417 2899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:59:20.496628 kubelet[2899]: I0702 08:59:20.496568 2899 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:59:20.496628 kubelet[2899]: I0702 08:59:20.496603 2899 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 08:59:20.496783 kubelet[2899]: E0702 08:59:20.496699 2899 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:59:20.498747 kubelet[2899]: W0702 08:59:20.498414 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:20.498747 kubelet[2899]: E0702 08:59:20.498595 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:20.515869 kubelet[2899]: I0702 08:59:20.515814 2899 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:59:20.515869 kubelet[2899]: I0702 08:59:20.515857 2899 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:59:20.516086 kubelet[2899]: I0702 08:59:20.515893 2899 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:59:20.526620 kubelet[2899]: I0702 08:59:20.526532 2899 policy_none.go:49] "None policy: Start" Jul 2 08:59:20.527623 kubelet[2899]: I0702 08:59:20.527540 2899 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:59:20.527753 kubelet[2899]: I0702 08:59:20.527635 2899 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:59:20.547657 systemd[1]: Created slice 
kubepods.slice - libcontainer container kubepods.slice. Jul 2 08:59:20.560815 kubelet[2899]: I0702 08:59:20.560704 2899 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-125" Jul 2 08:59:20.561496 kubelet[2899]: E0702 08:59:20.561420 2899 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.125:6443/api/v1/nodes\": dial tcp 172.31.26.125:6443: connect: connection refused" node="ip-172-31-26-125" Jul 2 08:59:20.565052 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 08:59:20.574034 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 08:59:20.582335 kubelet[2899]: I0702 08:59:20.582298 2899 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:59:20.583068 kubelet[2899]: I0702 08:59:20.582981 2899 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 08:59:20.583177 kubelet[2899]: I0702 08:59:20.583160 2899 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:59:20.585473 kubelet[2899]: E0702 08:59:20.585130 2899 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-125\" not found" Jul 2 08:59:20.597481 kubelet[2899]: I0702 08:59:20.597419 2899 topology_manager.go:215] "Topology Admit Handler" podUID="f92ed8ae59b58f70ca71fedb4778a097" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-125" Jul 2 08:59:20.600602 kubelet[2899]: I0702 08:59:20.600160 2899 topology_manager.go:215] "Topology Admit Handler" podUID="dd55bbdf98517d154d76de47272c72aa" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:20.602482 kubelet[2899]: I0702 08:59:20.602322 2899 topology_manager.go:215] "Topology Admit Handler" podUID="e56b1ac3b30ceafbb51d41732ef97c86" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-125" Jul 2 08:59:20.618512 systemd[1]: Created slice kubepods-burstable-podf92ed8ae59b58f70ca71fedb4778a097.slice - libcontainer container kubepods-burstable-podf92ed8ae59b58f70ca71fedb4778a097.slice. Jul 2 08:59:20.651706 systemd[1]: Created slice kubepods-burstable-poddd55bbdf98517d154d76de47272c72aa.slice - libcontainer container kubepods-burstable-poddd55bbdf98517d154d76de47272c72aa.slice. Jul 2 08:59:20.662675 systemd[1]: Created slice kubepods-burstable-pode56b1ac3b30ceafbb51d41732ef97c86.slice - libcontainer container kubepods-burstable-pode56b1ac3b30ceafbb51d41732ef97c86.slice. 
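The three "Topology Admit Handler" entries are the static pods picked up from the path registered earlier ("Adding static pod path" path="/etc/kubernetes/manifests"); they are admitted and started locally even while the API server is unreachable, and only their mirror pods depend on node registration succeeding. A short sketch, assuming the conventional kubeadm manifest filenames, of listing what the kubelet reads from that directory:

    from pathlib import Path

    MANIFEST_DIR = Path("/etc/kubernetes/manifests")  # static pod path from the kubelet log above

    # Typical kubeadm layout is kube-apiserver.yaml, kube-controller-manager.yaml and
    # kube-scheduler.yaml (plus etcd.yaml on stacked control planes); names assumed here.
    for manifest in sorted(MANIFEST_DIR.glob("*.yaml")):
        print(manifest.name)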
Jul 2 08:59:20.663052 kubelet[2899]: E0702 08:59:20.662979 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-125?timeout=10s\": dial tcp 172.31.26.125:6443: connect: connection refused" interval="400ms" Jul 2 08:59:20.667214 kubelet[2899]: I0702 08:59:20.667162 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd55bbdf98517d154d76de47272c72aa-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-125\" (UID: \"dd55bbdf98517d154d76de47272c72aa\") " pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:20.667486 kubelet[2899]: I0702 08:59:20.667224 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd55bbdf98517d154d76de47272c72aa-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-125\" (UID: \"dd55bbdf98517d154d76de47272c72aa\") " pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:20.667486 kubelet[2899]: I0702 08:59:20.667274 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd55bbdf98517d154d76de47272c72aa-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-125\" (UID: \"dd55bbdf98517d154d76de47272c72aa\") " pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:20.667486 kubelet[2899]: I0702 08:59:20.667313 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f92ed8ae59b58f70ca71fedb4778a097-ca-certs\") pod \"kube-apiserver-ip-172-31-26-125\" (UID: \"f92ed8ae59b58f70ca71fedb4778a097\") " pod="kube-system/kube-apiserver-ip-172-31-26-125" Jul 2 08:59:20.667486 kubelet[2899]: I0702 08:59:20.667348 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f92ed8ae59b58f70ca71fedb4778a097-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-125\" (UID: \"f92ed8ae59b58f70ca71fedb4778a097\") " pod="kube-system/kube-apiserver-ip-172-31-26-125" Jul 2 08:59:20.667486 kubelet[2899]: I0702 08:59:20.667387 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd55bbdf98517d154d76de47272c72aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-125\" (UID: \"dd55bbdf98517d154d76de47272c72aa\") " pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:20.667804 kubelet[2899]: I0702 08:59:20.667424 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e56b1ac3b30ceafbb51d41732ef97c86-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-125\" (UID: \"e56b1ac3b30ceafbb51d41732ef97c86\") " pod="kube-system/kube-scheduler-ip-172-31-26-125" Jul 2 08:59:20.667804 kubelet[2899]: I0702 08:59:20.667457 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f92ed8ae59b58f70ca71fedb4778a097-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-125\" (UID: 
\"f92ed8ae59b58f70ca71fedb4778a097\") " pod="kube-system/kube-apiserver-ip-172-31-26-125" Jul 2 08:59:20.667804 kubelet[2899]: I0702 08:59:20.667492 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd55bbdf98517d154d76de47272c72aa-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-125\" (UID: \"dd55bbdf98517d154d76de47272c72aa\") " pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:20.764756 kubelet[2899]: I0702 08:59:20.764258 2899 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-125" Jul 2 08:59:20.764756 kubelet[2899]: E0702 08:59:20.764667 2899 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.125:6443/api/v1/nodes\": dial tcp 172.31.26.125:6443: connect: connection refused" node="ip-172-31-26-125" Jul 2 08:59:20.943873 containerd[2016]: time="2024-07-02T08:59:20.943787258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-125,Uid:f92ed8ae59b58f70ca71fedb4778a097,Namespace:kube-system,Attempt:0,}" Jul 2 08:59:20.959509 containerd[2016]: time="2024-07-02T08:59:20.959185046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-125,Uid:dd55bbdf98517d154d76de47272c72aa,Namespace:kube-system,Attempt:0,}" Jul 2 08:59:20.968366 containerd[2016]: time="2024-07-02T08:59:20.968243378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-125,Uid:e56b1ac3b30ceafbb51d41732ef97c86,Namespace:kube-system,Attempt:0,}" Jul 2 08:59:21.064764 kubelet[2899]: E0702 08:59:21.064229 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-125?timeout=10s\": dial tcp 172.31.26.125:6443: connect: connection refused" interval="800ms" Jul 2 08:59:21.166956 kubelet[2899]: I0702 08:59:21.166755 2899 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-125" Jul 2 08:59:21.167327 kubelet[2899]: E0702 08:59:21.167223 2899 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.125:6443/api/v1/nodes\": dial tcp 172.31.26.125:6443: connect: connection refused" node="ip-172-31-26-125" Jul 2 08:59:21.485454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462196332.mount: Deactivated successfully. 
Jul 2 08:59:21.496151 containerd[2016]: time="2024-07-02T08:59:21.494844313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:59:21.499869 containerd[2016]: time="2024-07-02T08:59:21.499806265Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 2 08:59:21.501487 containerd[2016]: time="2024-07-02T08:59:21.501440809Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:59:21.503938 containerd[2016]: time="2024-07-02T08:59:21.503861629Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:59:21.505419 containerd[2016]: time="2024-07-02T08:59:21.505093465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:59:21.505555 kubelet[2899]: E0702 08:59:21.505263 2899 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.125:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.125:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-125.17de59b9613dc623 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-125,UID:ip-172-31-26-125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-125,},FirstTimestamp:2024-07-02 08:59:20.437368355 +0000 UTC m=+0.872159657,LastTimestamp:2024-07-02 08:59:20.437368355 +0000 UTC m=+0.872159657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-125,}" Jul 2 08:59:21.508274 containerd[2016]: time="2024-07-02T08:59:21.507701929Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:59:21.508274 containerd[2016]: time="2024-07-02T08:59:21.508227721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:59:21.512809 containerd[2016]: time="2024-07-02T08:59:21.512736301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:59:21.518062 containerd[2016]: time="2024-07-02T08:59:21.517767397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.842139ms" Jul 2 08:59:21.527230 containerd[2016]: time="2024-07-02T08:59:21.527156389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.759987ms" Jul 2 08:59:21.527881 containerd[2016]: time="2024-07-02T08:59:21.527625421Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.278855ms" Jul 2 08:59:21.556557 kubelet[2899]: W0702 08:59:21.556462 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:21.556557 kubelet[2899]: E0702 08:59:21.556563 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:21.673135 kubelet[2899]: W0702 08:59:21.672976 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-125&limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:21.673135 kubelet[2899]: E0702 08:59:21.673069 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-125&limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:21.764369 containerd[2016]: time="2024-07-02T08:59:21.763942502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:21.764369 containerd[2016]: time="2024-07-02T08:59:21.764094554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:21.764369 containerd[2016]: time="2024-07-02T08:59:21.764150990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:21.764369 containerd[2016]: time="2024-07-02T08:59:21.764179274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:21.770166 containerd[2016]: time="2024-07-02T08:59:21.770025770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:21.770373 containerd[2016]: time="2024-07-02T08:59:21.770134814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:21.770373 containerd[2016]: time="2024-07-02T08:59:21.770192846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:21.770373 containerd[2016]: time="2024-07-02T08:59:21.770227826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:21.777951 containerd[2016]: time="2024-07-02T08:59:21.777819650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:21.780314 containerd[2016]: time="2024-07-02T08:59:21.779790266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:21.780314 containerd[2016]: time="2024-07-02T08:59:21.779834510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:21.780314 containerd[2016]: time="2024-07-02T08:59:21.779860394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:21.812093 systemd[1]: Started cri-containerd-e03cebc08101eddbdbe6e78dff40a5972ef7cd0e188088d7c5108a6a547594d5.scope - libcontainer container e03cebc08101eddbdbe6e78dff40a5972ef7cd0e188088d7c5108a6a547594d5. Jul 2 08:59:21.838763 systemd[1]: Started cri-containerd-3081fa91cff62a9189528e8b1684084e9e1a6d720d49152551a8c380474a807e.scope - libcontainer container 3081fa91cff62a9189528e8b1684084e9e1a6d720d49152551a8c380474a807e. Jul 2 08:59:21.848401 systemd[1]: Started cri-containerd-d87d02c98ebc4a3a59bd347718d31dfa54bcde9dadb92cc1c32db0dd45f4eec5.scope - libcontainer container d87d02c98ebc4a3a59bd347718d31dfa54bcde9dadb92cc1c32db0dd45f4eec5. Jul 2 08:59:21.860987 kubelet[2899]: W0702 08:59:21.859949 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:21.860987 kubelet[2899]: E0702 08:59:21.860056 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:21.865369 kubelet[2899]: E0702 08:59:21.865296 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-125?timeout=10s\": dial tcp 172.31.26.125:6443: connect: connection refused" interval="1.6s" Jul 2 08:59:21.876556 kubelet[2899]: W0702 08:59:21.876461 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:21.876556 kubelet[2899]: E0702 08:59:21.876564 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:21.946872 containerd[2016]: time="2024-07-02T08:59:21.946625967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-125,Uid:dd55bbdf98517d154d76de47272c72aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"e03cebc08101eddbdbe6e78dff40a5972ef7cd0e188088d7c5108a6a547594d5\"" Jul 2 08:59:21.960823 
containerd[2016]: time="2024-07-02T08:59:21.960136287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-125,Uid:f92ed8ae59b58f70ca71fedb4778a097,Namespace:kube-system,Attempt:0,} returns sandbox id \"d87d02c98ebc4a3a59bd347718d31dfa54bcde9dadb92cc1c32db0dd45f4eec5\"" Jul 2 08:59:21.960823 containerd[2016]: time="2024-07-02T08:59:21.960676791Z" level=info msg="CreateContainer within sandbox \"e03cebc08101eddbdbe6e78dff40a5972ef7cd0e188088d7c5108a6a547594d5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:59:21.966166 containerd[2016]: time="2024-07-02T08:59:21.966089127Z" level=info msg="CreateContainer within sandbox \"d87d02c98ebc4a3a59bd347718d31dfa54bcde9dadb92cc1c32db0dd45f4eec5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:59:21.970318 kubelet[2899]: I0702 08:59:21.970164 2899 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-125" Jul 2 08:59:21.970682 kubelet[2899]: E0702 08:59:21.970616 2899 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.125:6443/api/v1/nodes\": dial tcp 172.31.26.125:6443: connect: connection refused" node="ip-172-31-26-125" Jul 2 08:59:21.981474 containerd[2016]: time="2024-07-02T08:59:21.981417411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-125,Uid:e56b1ac3b30ceafbb51d41732ef97c86,Namespace:kube-system,Attempt:0,} returns sandbox id \"3081fa91cff62a9189528e8b1684084e9e1a6d720d49152551a8c380474a807e\"" Jul 2 08:59:21.990839 containerd[2016]: time="2024-07-02T08:59:21.990747759Z" level=info msg="CreateContainer within sandbox \"3081fa91cff62a9189528e8b1684084e9e1a6d720d49152551a8c380474a807e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:59:22.007938 containerd[2016]: time="2024-07-02T08:59:22.007858331Z" level=info msg="CreateContainer within sandbox \"e03cebc08101eddbdbe6e78dff40a5972ef7cd0e188088d7c5108a6a547594d5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3cc1f64cc05139b1219f98fd37bacabccad0486157431407db9d54f418d6e769\"" Jul 2 08:59:22.009047 containerd[2016]: time="2024-07-02T08:59:22.008977511Z" level=info msg="StartContainer for \"3cc1f64cc05139b1219f98fd37bacabccad0486157431407db9d54f418d6e769\"" Jul 2 08:59:22.021739 containerd[2016]: time="2024-07-02T08:59:22.021286571Z" level=info msg="CreateContainer within sandbox \"d87d02c98ebc4a3a59bd347718d31dfa54bcde9dadb92cc1c32db0dd45f4eec5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b870d55544c28b405f354237663ca59a0686ea32506f9b22594b404aeb5b085a\"" Jul 2 08:59:22.025177 containerd[2016]: time="2024-07-02T08:59:22.022160555Z" level=info msg="StartContainer for \"b870d55544c28b405f354237663ca59a0686ea32506f9b22594b404aeb5b085a\"" Jul 2 08:59:22.037617 containerd[2016]: time="2024-07-02T08:59:22.037557275Z" level=info msg="CreateContainer within sandbox \"3081fa91cff62a9189528e8b1684084e9e1a6d720d49152551a8c380474a807e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2d6a1c76612639d0b4616b11ef88db78b30e3c3d0c88882bf3505820fc98c4b0\"" Jul 2 08:59:22.038974 containerd[2016]: time="2024-07-02T08:59:22.038914523Z" level=info msg="StartContainer for \"2d6a1c76612639d0b4616b11ef88db78b30e3c3d0c88882bf3505820fc98c4b0\"" Jul 2 08:59:22.080028 systemd[1]: Started cri-containerd-3cc1f64cc05139b1219f98fd37bacabccad0486157431407db9d54f418d6e769.scope - 
libcontainer container 3cc1f64cc05139b1219f98fd37bacabccad0486157431407db9d54f418d6e769. Jul 2 08:59:22.133987 systemd[1]: Started cri-containerd-2d6a1c76612639d0b4616b11ef88db78b30e3c3d0c88882bf3505820fc98c4b0.scope - libcontainer container 2d6a1c76612639d0b4616b11ef88db78b30e3c3d0c88882bf3505820fc98c4b0. Jul 2 08:59:22.136215 systemd[1]: Started cri-containerd-b870d55544c28b405f354237663ca59a0686ea32506f9b22594b404aeb5b085a.scope - libcontainer container b870d55544c28b405f354237663ca59a0686ea32506f9b22594b404aeb5b085a. Jul 2 08:59:22.196771 containerd[2016]: time="2024-07-02T08:59:22.196681284Z" level=info msg="StartContainer for \"3cc1f64cc05139b1219f98fd37bacabccad0486157431407db9d54f418d6e769\" returns successfully" Jul 2 08:59:22.276775 containerd[2016]: time="2024-07-02T08:59:22.276055284Z" level=info msg="StartContainer for \"b870d55544c28b405f354237663ca59a0686ea32506f9b22594b404aeb5b085a\" returns successfully" Jul 2 08:59:22.286553 containerd[2016]: time="2024-07-02T08:59:22.286387584Z" level=info msg="StartContainer for \"2d6a1c76612639d0b4616b11ef88db78b30e3c3d0c88882bf3505820fc98c4b0\" returns successfully" Jul 2 08:59:22.430748 kubelet[2899]: E0702 08:59:22.430100 2899 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.125:6443: connect: connection refused Jul 2 08:59:23.061816 update_engine[1991]: I0702 08:59:23.061753 1991 update_attempter.cc:509] Updating boot flags... Jul 2 08:59:23.197874 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3185) Jul 2 08:59:23.576825 kubelet[2899]: I0702 08:59:23.576581 2899 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-125" Jul 2 08:59:23.625777 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3187) Jul 2 08:59:24.040768 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3187) Jul 2 08:59:26.499095 kubelet[2899]: E0702 08:59:26.499015 2899 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-125\" not found" node="ip-172-31-26-125" Jul 2 08:59:26.563607 kubelet[2899]: I0702 08:59:26.562182 2899 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-26-125" Jul 2 08:59:26.845540 kubelet[2899]: E0702 08:59:26.845038 2899 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-26-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-125" Jul 2 08:59:27.438760 kubelet[2899]: I0702 08:59:27.437792 2899 apiserver.go:52] "Watching apiserver" Jul 2 08:59:27.462053 kubelet[2899]: I0702 08:59:27.461995 2899 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:59:28.512702 systemd[1]: Reloading requested from client PID 3440 ('systemctl') (unit session-9.scope)... Jul 2 08:59:28.512763 systemd[1]: Reloading... Jul 2 08:59:28.688775 zram_generator::config[3487]: No configuration found. 
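Registration finally succeeds at 08:59:26, a few seconds after the kube-apiserver container reports a successful start; the one rejection left in that window, "no PriorityClass with name system-node-critical was found", is most plausibly a startup race: the API server creates the built-in system-node-critical and system-cluster-critical classes during its own bootstrap, and the kubelet submitted the scheduler's mirror pod before that bootstrap had completed. A read-back sketch with the official Python client, assuming a working kubeconfig, to confirm the class exists once things settle:

    from kubernetes import client, config  # official Kubernetes Python client, assumed installed

    config.load_kube_config()
    pc = client.SchedulingV1Api().read_priority_class("system-node-critical")
    print(pc.metadata.name, pc.value)      # the built-in class the mirror-pod admission expected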
Jul 2 08:59:28.897224 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:59:29.099679 systemd[1]: Reloading finished in 586 ms. Jul 2 08:59:29.177559 kubelet[2899]: E0702 08:59:29.176203 2899 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ip-172-31-26-125.17de59b9613dc623 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-125,UID:ip-172-31-26-125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-125,},FirstTimestamp:2024-07-02 08:59:20.437368355 +0000 UTC m=+0.872159657,LastTimestamp:2024-07-02 08:59:20.437368355 +0000 UTC m=+0.872159657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-125,}" Jul 2 08:59:29.176835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:59:29.193442 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:59:29.194101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:59:29.194309 systemd[1]: kubelet.service: Consumed 1.569s CPU time, 114.3M memory peak, 0B memory swap peak. Jul 2 08:59:29.207273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:59:29.777030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:59:29.795607 (kubelet)[3538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:59:29.939781 kubelet[3538]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:59:29.939781 kubelet[3538]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:59:29.939781 kubelet[3538]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:59:29.939781 kubelet[3538]: I0702 08:59:29.938540 3538 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:59:29.947652 kubelet[3538]: I0702 08:59:29.947610 3538 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 08:59:29.947921 kubelet[3538]: I0702 08:59:29.947899 3538 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:59:29.948407 kubelet[3538]: I0702 08:59:29.948377 3538 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 08:59:29.951327 kubelet[3538]: I0702 08:59:29.951284 3538 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
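Unlike the first start at 08:59:20, where the bootstrap CSR POST was refused, this kubelet comes up with an already-issued client certificate and loads it from /var/lib/kubelet/pki/kubelet-client-current.pem, with rotation continuing in the background. A quick inspection sketch for checking which identity and expiry that file currently carries; it assumes the third-party cryptography package and relies on the certificate block preceding the key in that file:

    from cryptography import x509  # third-party package, assumed installed

    PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"  # path from the kubelet log above

    with open(PEM, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())      # parses the leading certificate block

    print("subject:", cert.subject.rfc4514_string())  # typically O=system:nodes, CN=system:node:<name>
    print("not valid after:", cert.not_valid_after)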
Jul 2 08:59:29.954038 kubelet[3538]: I0702 08:59:29.953999 3538 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:59:29.973081 kubelet[3538]: I0702 08:59:29.972672 3538 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 08:59:29.973507 kubelet[3538]: I0702 08:59:29.973451 3538 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:59:29.973825 kubelet[3538]: I0702 08:59:29.973508 3538 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:59:29.973986 kubelet[3538]: I0702 08:59:29.973830 3538 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:59:29.973986 kubelet[3538]: I0702 08:59:29.973851 3538 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:59:29.973986 kubelet[3538]: I0702 08:59:29.973911 3538 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:59:29.974179 kubelet[3538]: I0702 08:59:29.974080 3538 kubelet.go:400] "Attempting to sync node with API server" Jul 2 08:59:29.974179 kubelet[3538]: I0702 08:59:29.974103 3538 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:59:29.974179 kubelet[3538]: I0702 08:59:29.974154 3538 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:59:29.974925 kubelet[3538]: I0702 08:59:29.974188 3538 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:59:29.985155 kubelet[3538]: I0702 08:59:29.981461 3538 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:59:29.985155 kubelet[3538]: I0702 08:59:29.982832 3538 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:59:29.985155 kubelet[3538]: I0702 08:59:29.983459 3538 server.go:1264] "Started kubelet" Jul 2 08:59:29.990040 kubelet[3538]: I0702 
08:59:29.989471 3538 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:59:29.996058 kubelet[3538]: I0702 08:59:29.995980 3538 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:59:29.999867 kubelet[3538]: I0702 08:59:29.999815 3538 server.go:455] "Adding debug handlers to kubelet server" Jul 2 08:59:30.003737 kubelet[3538]: I0702 08:59:30.003638 3538 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:59:30.005149 kubelet[3538]: I0702 08:59:30.005109 3538 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:59:30.011793 kubelet[3538]: I0702 08:59:30.010282 3538 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:59:30.012602 kubelet[3538]: I0702 08:59:30.012364 3538 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 08:59:30.012699 kubelet[3538]: I0702 08:59:30.012641 3538 reconciler.go:26] "Reconciler: start to sync state" Jul 2 08:59:30.040047 kubelet[3538]: I0702 08:59:30.039886 3538 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:59:30.046686 kubelet[3538]: I0702 08:59:30.044984 3538 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:59:30.046686 kubelet[3538]: I0702 08:59:30.045062 3538 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:59:30.046686 kubelet[3538]: I0702 08:59:30.045094 3538 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 08:59:30.046686 kubelet[3538]: E0702 08:59:30.045169 3538 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:59:30.083751 kubelet[3538]: I0702 08:59:30.083432 3538 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:59:30.083751 kubelet[3538]: I0702 08:59:30.083469 3538 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:59:30.083751 kubelet[3538]: I0702 08:59:30.083644 3538 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:59:30.084942 kubelet[3538]: E0702 08:59:30.084448 3538 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:59:30.120013 kubelet[3538]: I0702 08:59:30.119685 3538 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-125" Jul 2 08:59:30.141271 kubelet[3538]: I0702 08:59:30.141220 3538 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-26-125" Jul 2 08:59:30.141575 kubelet[3538]: I0702 08:59:30.141553 3538 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-26-125" Jul 2 08:59:30.146328 kubelet[3538]: E0702 08:59:30.146286 3538 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 08:59:30.234918 kubelet[3538]: I0702 08:59:30.234874 3538 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:59:30.234918 kubelet[3538]: I0702 08:59:30.234906 3538 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:59:30.235135 kubelet[3538]: I0702 08:59:30.234942 3538 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:59:30.235212 kubelet[3538]: I0702 08:59:30.235179 3538 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:59:30.235272 kubelet[3538]: I0702 08:59:30.235200 3538 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 08:59:30.235272 kubelet[3538]: I0702 08:59:30.235236 3538 policy_none.go:49] "None policy: Start" Jul 2 08:59:30.236816 kubelet[3538]: I0702 08:59:30.236605 3538 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:59:30.236816 kubelet[3538]: I0702 08:59:30.236887 3538 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:59:30.236816 kubelet[3538]: I0702 08:59:30.237234 3538 state_mem.go:75] "Updated machine memory state" Jul 2 08:59:30.253229 kubelet[3538]: I0702 08:59:30.253173 3538 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:59:30.253558 kubelet[3538]: I0702 08:59:30.253485 3538 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 08:59:30.256215 kubelet[3538]: I0702 08:59:30.255861 3538 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:59:30.347145 kubelet[3538]: I0702 08:59:30.346984 3538 topology_manager.go:215] "Topology Admit Handler" podUID="f92ed8ae59b58f70ca71fedb4778a097" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-125" Jul 2 08:59:30.347793 kubelet[3538]: I0702 08:59:30.347185 3538 topology_manager.go:215] "Topology Admit Handler" podUID="dd55bbdf98517d154d76de47272c72aa" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:30.347793 kubelet[3538]: I0702 08:59:30.347286 3538 topology_manager.go:215] "Topology Admit Handler" podUID="e56b1ac3b30ceafbb51d41732ef97c86" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-125" Jul 2 08:59:30.417076 kubelet[3538]: I0702 08:59:30.416957 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f92ed8ae59b58f70ca71fedb4778a097-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-125\" (UID: \"f92ed8ae59b58f70ca71fedb4778a097\") " pod="kube-system/kube-apiserver-ip-172-31-26-125" Jul 2 08:59:30.417255 kubelet[3538]: I0702 08:59:30.417145 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/f92ed8ae59b58f70ca71fedb4778a097-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-125\" (UID: \"f92ed8ae59b58f70ca71fedb4778a097\") " pod="kube-system/kube-apiserver-ip-172-31-26-125" Jul 2 08:59:30.417314 kubelet[3538]: I0702 08:59:30.417290 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e56b1ac3b30ceafbb51d41732ef97c86-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-125\" (UID: \"e56b1ac3b30ceafbb51d41732ef97c86\") " pod="kube-system/kube-scheduler-ip-172-31-26-125" Jul 2 08:59:30.418746 kubelet[3538]: I0702 08:59:30.417453 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f92ed8ae59b58f70ca71fedb4778a097-ca-certs\") pod \"kube-apiserver-ip-172-31-26-125\" (UID: \"f92ed8ae59b58f70ca71fedb4778a097\") " pod="kube-system/kube-apiserver-ip-172-31-26-125" Jul 2 08:59:30.418746 kubelet[3538]: I0702 08:59:30.417605 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd55bbdf98517d154d76de47272c72aa-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-125\" (UID: \"dd55bbdf98517d154d76de47272c72aa\") " pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:30.418746 kubelet[3538]: I0702 08:59:30.417706 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd55bbdf98517d154d76de47272c72aa-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-125\" (UID: \"dd55bbdf98517d154d76de47272c72aa\") " pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:30.418746 kubelet[3538]: I0702 08:59:30.417809 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd55bbdf98517d154d76de47272c72aa-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-125\" (UID: \"dd55bbdf98517d154d76de47272c72aa\") " pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:30.418746 kubelet[3538]: I0702 08:59:30.417900 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd55bbdf98517d154d76de47272c72aa-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-125\" (UID: \"dd55bbdf98517d154d76de47272c72aa\") " pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:30.419079 kubelet[3538]: I0702 08:59:30.417976 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd55bbdf98517d154d76de47272c72aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-125\" (UID: \"dd55bbdf98517d154d76de47272c72aa\") " pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:30.991815 kubelet[3538]: I0702 08:59:30.991753 3538 apiserver.go:52] "Watching apiserver" Jul 2 08:59:31.013315 kubelet[3538]: I0702 08:59:31.013232 3538 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:59:31.163857 kubelet[3538]: E0702 08:59:31.162848 3538 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-26-125\" already 
exists" pod="kube-system/kube-controller-manager-ip-172-31-26-125" Jul 2 08:59:31.357235 kubelet[3538]: I0702 08:59:31.356019 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-125" podStartSLOduration=1.355996786 podStartE2EDuration="1.355996786s" podCreationTimestamp="2024-07-02 08:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:59:31.302653917 +0000 UTC m=+1.493244968" watchObservedRunningTime="2024-07-02 08:59:31.355996786 +0000 UTC m=+1.546587813" Jul 2 08:59:31.381995 kubelet[3538]: I0702 08:59:31.381785 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-125" podStartSLOduration=1.381761326 podStartE2EDuration="1.381761326s" podCreationTimestamp="2024-07-02 08:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:59:31.358793974 +0000 UTC m=+1.549385001" watchObservedRunningTime="2024-07-02 08:59:31.381761326 +0000 UTC m=+1.572352365" Jul 2 08:59:31.420935 kubelet[3538]: I0702 08:59:31.420838 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-125" podStartSLOduration=1.420819586 podStartE2EDuration="1.420819586s" podCreationTimestamp="2024-07-02 08:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:59:31.387587794 +0000 UTC m=+1.578178857" watchObservedRunningTime="2024-07-02 08:59:31.420819586 +0000 UTC m=+1.611410613" Jul 2 08:59:36.663916 sudo[2360]: pam_unix(sudo:session): session closed for user root Jul 2 08:59:36.687989 sshd[2357]: pam_unix(sshd:session): session closed for user core Jul 2 08:59:36.693353 systemd[1]: sshd@8-172.31.26.125:22-147.75.109.163:40810.service: Deactivated successfully. Jul 2 08:59:36.699057 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:59:36.700131 systemd[1]: session-9.scope: Consumed 11.859s CPU time, 134.6M memory peak, 0B memory swap peak. Jul 2 08:59:36.702089 systemd-logind[1990]: Session 9 logged out. Waiting for processes to exit. Jul 2 08:59:36.705570 systemd-logind[1990]: Removed session 9. Jul 2 08:59:44.431835 kubelet[3538]: I0702 08:59:44.431500 3538 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:59:44.435104 containerd[2016]: time="2024-07-02T08:59:44.433300402Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 08:59:44.435835 kubelet[3538]: I0702 08:59:44.433633 3538 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:59:44.457566 kubelet[3538]: I0702 08:59:44.457488 3538 topology_manager.go:215] "Topology Admit Handler" podUID="f5838422-25d0-48f6-9501-fbf429ece15b" podNamespace="kube-system" podName="kube-proxy-xkdp4" Jul 2 08:59:44.474857 systemd[1]: Created slice kubepods-besteffort-podf5838422_25d0_48f6_9501_fbf429ece15b.slice - libcontainer container kubepods-besteffort-podf5838422_25d0_48f6_9501_fbf429ece15b.slice. 
Jul 2 08:59:44.482681 kubelet[3538]: W0702 08:59:44.482299 3538 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-26-125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-125' and this object Jul 2 08:59:44.482681 kubelet[3538]: E0702 08:59:44.482359 3538 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-26-125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-125' and this object Jul 2 08:59:44.482681 kubelet[3538]: W0702 08:59:44.482551 3538 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-26-125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-125' and this object Jul 2 08:59:44.482681 kubelet[3538]: E0702 08:59:44.482585 3538 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-26-125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-125' and this object Jul 2 08:59:44.506009 kubelet[3538]: I0702 08:59:44.505759 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f5838422-25d0-48f6-9501-fbf429ece15b-kube-proxy\") pod \"kube-proxy-xkdp4\" (UID: \"f5838422-25d0-48f6-9501-fbf429ece15b\") " pod="kube-system/kube-proxy-xkdp4" Jul 2 08:59:44.506009 kubelet[3538]: I0702 08:59:44.505824 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg649\" (UniqueName: \"kubernetes.io/projected/f5838422-25d0-48f6-9501-fbf429ece15b-kube-api-access-gg649\") pod \"kube-proxy-xkdp4\" (UID: \"f5838422-25d0-48f6-9501-fbf429ece15b\") " pod="kube-system/kube-proxy-xkdp4" Jul 2 08:59:44.506009 kubelet[3538]: I0702 08:59:44.505872 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5838422-25d0-48f6-9501-fbf429ece15b-xtables-lock\") pod \"kube-proxy-xkdp4\" (UID: \"f5838422-25d0-48f6-9501-fbf429ece15b\") " pod="kube-system/kube-proxy-xkdp4" Jul 2 08:59:44.506009 kubelet[3538]: I0702 08:59:44.505905 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5838422-25d0-48f6-9501-fbf429ece15b-lib-modules\") pod \"kube-proxy-xkdp4\" (UID: \"f5838422-25d0-48f6-9501-fbf429ece15b\") " pod="kube-system/kube-proxy-xkdp4" Jul 2 08:59:44.619755 kubelet[3538]: I0702 08:59:44.618126 3538 topology_manager.go:215] "Topology Admit Handler" podUID="3f148c3c-d92f-4e76-8cf6-0fd930977775" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-2hkxc" Jul 2 08:59:44.635529 systemd[1]: Created slice kubepods-besteffort-pod3f148c3c_d92f_4e76_8cf6_0fd930977775.slice - libcontainer container kubepods-besteffort-pod3f148c3c_d92f_4e76_8cf6_0fd930977775.slice. 
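The two "forbidden ... no relationship found between node" errors come from the node authorizer: a kubelet may only read ConfigMaps referenced by pods already bound to its node, and at this instant the binding of kube-proxy-xkdp4 has not yet propagated into the authorizer's graph, so the list/watch is refused and simply retried. A comparison sketch with the official Python client, assuming cluster-admin credentials in the local kubeconfig rather than the node's own identity, showing the same objects read directly:

    from kubernetes import client, config  # official Kubernetes Python client, assumed installed

    config.load_kube_config()              # admin credentials, unlike system:node:ip-172-31-26-125
    v1 = client.CoreV1Api()

    for name in ("kube-proxy", "kube-root-ca.crt"):
        cm = v1.read_namespaced_config_map(name, "kube-system")
        print(name, "keys:", sorted(cm.data.keys()))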
Jul 2 08:59:44.707964 kubelet[3538]: I0702 08:59:44.707834 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f148c3c-d92f-4e76-8cf6-0fd930977775-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-2hkxc\" (UID: \"3f148c3c-d92f-4e76-8cf6-0fd930977775\") " pod="tigera-operator/tigera-operator-76ff79f7fd-2hkxc" Jul 2 08:59:44.708177 kubelet[3538]: I0702 08:59:44.707983 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvdx\" (UniqueName: \"kubernetes.io/projected/3f148c3c-d92f-4e76-8cf6-0fd930977775-kube-api-access-ssvdx\") pod \"tigera-operator-76ff79f7fd-2hkxc\" (UID: \"3f148c3c-d92f-4e76-8cf6-0fd930977775\") " pod="tigera-operator/tigera-operator-76ff79f7fd-2hkxc" Jul 2 08:59:44.942741 containerd[2016]: time="2024-07-02T08:59:44.942614005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-2hkxc,Uid:3f148c3c-d92f-4e76-8cf6-0fd930977775,Namespace:tigera-operator,Attempt:0,}" Jul 2 08:59:44.994479 containerd[2016]: time="2024-07-02T08:59:44.994095961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:44.994479 containerd[2016]: time="2024-07-02T08:59:44.994185217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:44.994479 containerd[2016]: time="2024-07-02T08:59:44.994259449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:44.994479 containerd[2016]: time="2024-07-02T08:59:44.994320241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:45.030258 systemd[1]: run-containerd-runc-k8s.io-bb60d6b34d5fb1df6002a340398814ea61716adf53776a58b30f005369086937-runc.sZlVKY.mount: Deactivated successfully. Jul 2 08:59:45.042048 systemd[1]: Started cri-containerd-bb60d6b34d5fb1df6002a340398814ea61716adf53776a58b30f005369086937.scope - libcontainer container bb60d6b34d5fb1df6002a340398814ea61716adf53776a58b30f005369086937. Jul 2 08:59:45.113452 containerd[2016]: time="2024-07-02T08:59:45.113391310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-2hkxc,Uid:3f148c3c-d92f-4e76-8cf6-0fd930977775,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bb60d6b34d5fb1df6002a340398814ea61716adf53776a58b30f005369086937\"" Jul 2 08:59:45.116587 containerd[2016]: time="2024-07-02T08:59:45.116447974Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 08:59:45.658800 kubelet[3538]: E0702 08:59:45.658364 3538 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 2 08:59:45.658800 kubelet[3538]: E0702 08:59:45.658412 3538 projected.go:200] Error preparing data for projected volume kube-api-access-gg649 for pod kube-system/kube-proxy-xkdp4: failed to sync configmap cache: timed out waiting for the condition Jul 2 08:59:45.658800 kubelet[3538]: E0702 08:59:45.658499 3538 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f5838422-25d0-48f6-9501-fbf429ece15b-kube-api-access-gg649 podName:f5838422-25d0-48f6-9501-fbf429ece15b nodeName:}" failed. 
No retries permitted until 2024-07-02 08:59:46.158468909 +0000 UTC m=+16.349059936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gg649" (UniqueName: "kubernetes.io/projected/f5838422-25d0-48f6-9501-fbf429ece15b-kube-api-access-gg649") pod "kube-proxy-xkdp4" (UID: "f5838422-25d0-48f6-9501-fbf429ece15b") : failed to sync configmap cache: timed out waiting for the condition Jul 2 08:59:46.292582 containerd[2016]: time="2024-07-02T08:59:46.292045368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xkdp4,Uid:f5838422-25d0-48f6-9501-fbf429ece15b,Namespace:kube-system,Attempt:0,}" Jul 2 08:59:46.341971 containerd[2016]: time="2024-07-02T08:59:46.341516652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:46.341971 containerd[2016]: time="2024-07-02T08:59:46.341625564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:46.341971 containerd[2016]: time="2024-07-02T08:59:46.341671272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:46.342226 containerd[2016]: time="2024-07-02T08:59:46.341996640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:46.391038 systemd[1]: Started cri-containerd-2d9a0efb47423c673410c4ba951021ac43347d108970c0b54b89485c4260d028.scope - libcontainer container 2d9a0efb47423c673410c4ba951021ac43347d108970c0b54b89485c4260d028. Jul 2 08:59:46.469344 containerd[2016]: time="2024-07-02T08:59:46.469123525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xkdp4,Uid:f5838422-25d0-48f6-9501-fbf429ece15b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d9a0efb47423c673410c4ba951021ac43347d108970c0b54b89485c4260d028\"" Jul 2 08:59:46.477320 containerd[2016]: time="2024-07-02T08:59:46.477164977Z" level=info msg="CreateContainer within sandbox \"2d9a0efb47423c673410c4ba951021ac43347d108970c0b54b89485c4260d028\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:59:46.514121 containerd[2016]: time="2024-07-02T08:59:46.513985189Z" level=info msg="CreateContainer within sandbox \"2d9a0efb47423c673410c4ba951021ac43347d108970c0b54b89485c4260d028\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5ab710ef70165698fc4e9eae225ef750ec21b5603e62a4e52dad4ecabb20d15e\"" Jul 2 08:59:46.516770 containerd[2016]: time="2024-07-02T08:59:46.516087181Z" level=info msg="StartContainer for \"5ab710ef70165698fc4e9eae225ef750ec21b5603e62a4e52dad4ecabb20d15e\"" Jul 2 08:59:46.570015 systemd[1]: Started cri-containerd-5ab710ef70165698fc4e9eae225ef750ec21b5603e62a4e52dad4ecabb20d15e.scope - libcontainer container 5ab710ef70165698fc4e9eae225ef750ec21b5603e62a4e52dad4ecabb20d15e. 
Jul 2 08:59:46.631551 containerd[2016]: time="2024-07-02T08:59:46.630059317Z" level=info msg="StartContainer for \"5ab710ef70165698fc4e9eae225ef750ec21b5603e62a4e52dad4ecabb20d15e\" returns successfully" Jul 2 08:59:47.225319 kubelet[3538]: I0702 08:59:47.225205 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xkdp4" podStartSLOduration=3.22516386 podStartE2EDuration="3.22516386s" podCreationTimestamp="2024-07-02 08:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:59:47.224790012 +0000 UTC m=+17.415381063" watchObservedRunningTime="2024-07-02 08:59:47.22516386 +0000 UTC m=+17.415754899" Jul 2 08:59:47.235169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073987701.mount: Deactivated successfully. Jul 2 08:59:47.573266 containerd[2016]: time="2024-07-02T08:59:47.572800262Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:47.574926 containerd[2016]: time="2024-07-02T08:59:47.574515938Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473638" Jul 2 08:59:47.576120 containerd[2016]: time="2024-07-02T08:59:47.576044378Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:47.580357 containerd[2016]: time="2024-07-02T08:59:47.580237322Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:47.582018 containerd[2016]: time="2024-07-02T08:59:47.581956274Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.4654072s" Jul 2 08:59:47.582153 containerd[2016]: time="2024-07-02T08:59:47.582016898Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 08:59:47.587407 containerd[2016]: time="2024-07-02T08:59:47.587311874Z" level=info msg="CreateContainer within sandbox \"bb60d6b34d5fb1df6002a340398814ea61716adf53776a58b30f005369086937\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 08:59:47.620369 containerd[2016]: time="2024-07-02T08:59:47.620293010Z" level=info msg="CreateContainer within sandbox \"bb60d6b34d5fb1df6002a340398814ea61716adf53776a58b30f005369086937\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"98903f8cf16ce5cc434ad884b79d00ddbcab14afb8238165ae031863ccbce087\"" Jul 2 08:59:47.622298 containerd[2016]: time="2024-07-02T08:59:47.621597698Z" level=info msg="StartContainer for \"98903f8cf16ce5cc434ad884b79d00ddbcab14afb8238165ae031863ccbce087\"" Jul 2 08:59:47.672036 systemd[1]: Started cri-containerd-98903f8cf16ce5cc434ad884b79d00ddbcab14afb8238165ae031863ccbce087.scope - libcontainer container 98903f8cf16ce5cc434ad884b79d00ddbcab14afb8238165ae031863ccbce087. 
Jul 2 08:59:47.723422 containerd[2016]: time="2024-07-02T08:59:47.723202599Z" level=info msg="StartContainer for \"98903f8cf16ce5cc434ad884b79d00ddbcab14afb8238165ae031863ccbce087\" returns successfully" Jul 2 08:59:48.207625 kubelet[3538]: I0702 08:59:48.207426 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-2hkxc" podStartSLOduration=1.7390419129999999 podStartE2EDuration="4.207405349s" podCreationTimestamp="2024-07-02 08:59:44 +0000 UTC" firstStartedPulling="2024-07-02 08:59:45.11579773 +0000 UTC m=+15.306388757" lastFinishedPulling="2024-07-02 08:59:47.584161178 +0000 UTC m=+17.774752193" observedRunningTime="2024-07-02 08:59:48.206965825 +0000 UTC m=+18.397556852" watchObservedRunningTime="2024-07-02 08:59:48.207405349 +0000 UTC m=+18.397996388" Jul 2 08:59:52.348094 kubelet[3538]: I0702 08:59:52.347201 3538 topology_manager.go:215] "Topology Admit Handler" podUID="2ced013d-3208-4f65-9483-96b6f97cc71c" podNamespace="calico-system" podName="calico-typha-cd4848bc4-t2qnl" Jul 2 08:59:52.361977 kubelet[3538]: I0702 08:59:52.361927 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ced013d-3208-4f65-9483-96b6f97cc71c-tigera-ca-bundle\") pod \"calico-typha-cd4848bc4-t2qnl\" (UID: \"2ced013d-3208-4f65-9483-96b6f97cc71c\") " pod="calico-system/calico-typha-cd4848bc4-t2qnl" Jul 2 08:59:52.362379 kubelet[3538]: I0702 08:59:52.362203 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2ced013d-3208-4f65-9483-96b6f97cc71c-typha-certs\") pod \"calico-typha-cd4848bc4-t2qnl\" (UID: \"2ced013d-3208-4f65-9483-96b6f97cc71c\") " pod="calico-system/calico-typha-cd4848bc4-t2qnl" Jul 2 08:59:52.362379 kubelet[3538]: I0702 08:59:52.362278 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkkjj\" (UniqueName: \"kubernetes.io/projected/2ced013d-3208-4f65-9483-96b6f97cc71c-kube-api-access-mkkjj\") pod \"calico-typha-cd4848bc4-t2qnl\" (UID: \"2ced013d-3208-4f65-9483-96b6f97cc71c\") " pod="calico-system/calico-typha-cd4848bc4-t2qnl" Jul 2 08:59:52.369081 systemd[1]: Created slice kubepods-besteffort-pod2ced013d_3208_4f65_9483_96b6f97cc71c.slice - libcontainer container kubepods-besteffort-pod2ced013d_3208_4f65_9483_96b6f97cc71c.slice. Jul 2 08:59:52.545396 kubelet[3538]: I0702 08:59:52.543738 3538 topology_manager.go:215] "Topology Admit Handler" podUID="f49d4e14-ac84-493a-92cc-31ac6b6ae03b" podNamespace="calico-system" podName="calico-node-z8tlq" Jul 2 08:59:52.562472 systemd[1]: Created slice kubepods-besteffort-podf49d4e14_ac84_493a_92cc_31ac6b6ae03b.slice - libcontainer container kubepods-besteffort-podf49d4e14_ac84_493a_92cc_31ac6b6ae03b.slice. 
Jul 2 08:59:52.564069 kubelet[3538]: I0702 08:59:52.564036 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-node-certs\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.565122 kubelet[3538]: I0702 08:59:52.564972 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-flexvol-driver-host\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.566045 kubelet[3538]: I0702 08:59:52.565960 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-lib-modules\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.566349 kubelet[3538]: I0702 08:59:52.566184 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-policysync\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.566349 kubelet[3538]: I0702 08:59:52.566273 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-net-dir\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.566900 kubelet[3538]: I0702 08:59:52.566322 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-tigera-ca-bundle\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.566900 kubelet[3538]: I0702 08:59:52.566578 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-var-lib-calico\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.567583 kubelet[3538]: I0702 08:59:52.567319 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-xtables-lock\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.567583 kubelet[3538]: I0702 08:59:52.567438 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-var-run-calico\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.569848 kubelet[3538]: I0702 08:59:52.569779 3538 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-bin-dir\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.570441 kubelet[3538]: I0702 08:59:52.570357 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86szs\" (UniqueName: \"kubernetes.io/projected/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-kube-api-access-86szs\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.571158 kubelet[3538]: I0702 08:59:52.570664 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-log-dir\") pod \"calico-node-z8tlq\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " pod="calico-system/calico-node-z8tlq" Jul 2 08:59:52.675736 kubelet[3538]: E0702 08:59:52.674551 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.675736 kubelet[3538]: W0702 08:59:52.674591 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.675736 kubelet[3538]: E0702 08:59:52.674649 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.675736 kubelet[3538]: E0702 08:59:52.675068 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.675736 kubelet[3538]: W0702 08:59:52.675086 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.675736 kubelet[3538]: E0702 08:59:52.675140 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.675736 kubelet[3538]: E0702 08:59:52.675486 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.675736 kubelet[3538]: W0702 08:59:52.675503 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.675736 kubelet[3538]: E0702 08:59:52.675558 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:52.680830 kubelet[3538]: E0702 08:59:52.680791 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.681793 kubelet[3538]: W0702 08:59:52.681755 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.681930 kubelet[3538]: E0702 08:59:52.681906 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.686048 containerd[2016]: time="2024-07-02T08:59:52.683668411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cd4848bc4-t2qnl,Uid:2ced013d-3208-4f65-9483-96b6f97cc71c,Namespace:calico-system,Attempt:0,}" Jul 2 08:59:52.693307 kubelet[3538]: E0702 08:59:52.690069 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.693307 kubelet[3538]: W0702 08:59:52.690101 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.693307 kubelet[3538]: E0702 08:59:52.690148 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.696188 kubelet[3538]: E0702 08:59:52.695548 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.696188 kubelet[3538]: W0702 08:59:52.695589 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.698773 kubelet[3538]: E0702 08:59:52.698384 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.701027 kubelet[3538]: E0702 08:59:52.699931 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.701027 kubelet[3538]: W0702 08:59:52.699970 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.701360 kubelet[3538]: E0702 08:59:52.700287 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:52.702279 kubelet[3538]: E0702 08:59:52.701613 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.702279 kubelet[3538]: W0702 08:59:52.701650 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.702279 kubelet[3538]: E0702 08:59:52.701839 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.702279 kubelet[3538]: E0702 08:59:52.702202 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.702279 kubelet[3538]: W0702 08:59:52.702223 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.707086 kubelet[3538]: E0702 08:59:52.702543 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.707086 kubelet[3538]: E0702 08:59:52.703878 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.707086 kubelet[3538]: W0702 08:59:52.703909 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.707086 kubelet[3538]: E0702 08:59:52.704031 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.707086 kubelet[3538]: E0702 08:59:52.704654 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.707086 kubelet[3538]: W0702 08:59:52.704676 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.707086 kubelet[3538]: E0702 08:59:52.705199 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.707086 kubelet[3538]: E0702 08:59:52.705910 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.707086 kubelet[3538]: W0702 08:59:52.705938 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.707086 kubelet[3538]: E0702 08:59:52.706158 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:52.707612 kubelet[3538]: E0702 08:59:52.706669 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.707612 kubelet[3538]: W0702 08:59:52.706695 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.709162 kubelet[3538]: E0702 08:59:52.707807 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.709162 kubelet[3538]: E0702 08:59:52.707904 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.709162 kubelet[3538]: W0702 08:59:52.707922 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.709162 kubelet[3538]: E0702 08:59:52.708891 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.709625 kubelet[3538]: E0702 08:59:52.709221 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.709625 kubelet[3538]: W0702 08:59:52.709243 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.709625 kubelet[3538]: E0702 08:59:52.709361 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.711374 kubelet[3538]: E0702 08:59:52.710098 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.711374 kubelet[3538]: W0702 08:59:52.710127 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.711374 kubelet[3538]: E0702 08:59:52.710848 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.713012 kubelet[3538]: E0702 08:59:52.712835 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.713012 kubelet[3538]: W0702 08:59:52.712875 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.713012 kubelet[3538]: E0702 08:59:52.712953 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:52.713449 kubelet[3538]: E0702 08:59:52.713300 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.713449 kubelet[3538]: W0702 08:59:52.713327 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.713449 kubelet[3538]: E0702 08:59:52.713397 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.714496 kubelet[3538]: E0702 08:59:52.713688 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.714496 kubelet[3538]: W0702 08:59:52.713739 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.714496 kubelet[3538]: E0702 08:59:52.714029 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.714496 kubelet[3538]: E0702 08:59:52.714093 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.714496 kubelet[3538]: W0702 08:59:52.714108 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.714496 kubelet[3538]: E0702 08:59:52.714145 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.719766 kubelet[3538]: E0702 08:59:52.717109 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.719766 kubelet[3538]: W0702 08:59:52.717162 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.719766 kubelet[3538]: E0702 08:59:52.717196 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.762757 containerd[2016]: time="2024-07-02T08:59:52.761482352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:52.762757 containerd[2016]: time="2024-07-02T08:59:52.761599508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:52.762757 containerd[2016]: time="2024-07-02T08:59:52.761645828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:52.762757 containerd[2016]: time="2024-07-02T08:59:52.761681408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:52.780106 kubelet[3538]: E0702 08:59:52.775468 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.780106 kubelet[3538]: W0702 08:59:52.775502 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.780106 kubelet[3538]: E0702 08:59:52.775533 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.781759 kubelet[3538]: I0702 08:59:52.781333 3538 topology_manager.go:215] "Topology Admit Handler" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" podNamespace="calico-system" podName="csi-node-driver-dbzvw" Jul 2 08:59:52.783971 kubelet[3538]: E0702 08:59:52.783700 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 08:59:52.786748 kubelet[3538]: E0702 08:59:52.785602 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.786974 kubelet[3538]: W0702 08:59:52.786932 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.787130 kubelet[3538]: E0702 08:59:52.787102 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.823062 systemd[1]: Started cri-containerd-6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944.scope - libcontainer container 6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944. Jul 2 08:59:52.858658 kubelet[3538]: E0702 08:59:52.858618 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.859826 kubelet[3538]: W0702 08:59:52.859783 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.860027 kubelet[3538]: E0702 08:59:52.860001 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.861795 kubelet[3538]: E0702 08:59:52.861758 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.861966 kubelet[3538]: W0702 08:59:52.861940 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.862103 kubelet[3538]: E0702 08:59:52.862079 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:52.863268 kubelet[3538]: E0702 08:59:52.863232 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.865765 kubelet[3538]: W0702 08:59:52.863794 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.865765 kubelet[3538]: E0702 08:59:52.863849 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.866119 kubelet[3538]: E0702 08:59:52.866087 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.866231 kubelet[3538]: W0702 08:59:52.866205 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.866354 kubelet[3538]: E0702 08:59:52.866330 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.867507 kubelet[3538]: E0702 08:59:52.866884 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.867507 kubelet[3538]: W0702 08:59:52.866913 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.867507 kubelet[3538]: E0702 08:59:52.866939 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.868738 kubelet[3538]: E0702 08:59:52.868339 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.869395 kubelet[3538]: W0702 08:59:52.868915 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.869790 kubelet[3538]: E0702 08:59:52.869540 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.871319 kubelet[3538]: E0702 08:59:52.870155 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.871319 kubelet[3538]: W0702 08:59:52.870184 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.871319 kubelet[3538]: E0702 08:59:52.870212 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:52.873747 kubelet[3538]: E0702 08:59:52.872628 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.873747 kubelet[3538]: W0702 08:59:52.872661 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.873747 kubelet[3538]: E0702 08:59:52.872694 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.875196 kubelet[3538]: E0702 08:59:52.874970 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.875196 kubelet[3538]: W0702 08:59:52.875002 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.875196 kubelet[3538]: E0702 08:59:52.875035 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.876666 kubelet[3538]: E0702 08:59:52.876240 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.876666 kubelet[3538]: W0702 08:59:52.876268 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.876666 kubelet[3538]: E0702 08:59:52.876300 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.877165 kubelet[3538]: E0702 08:59:52.877134 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.877308 kubelet[3538]: W0702 08:59:52.877281 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.877576 kubelet[3538]: E0702 08:59:52.877406 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.877922 kubelet[3538]: E0702 08:59:52.877898 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.878038 kubelet[3538]: W0702 08:59:52.878013 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.878661 kubelet[3538]: E0702 08:59:52.878142 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:52.879098 containerd[2016]: time="2024-07-02T08:59:52.879053264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z8tlq,Uid:f49d4e14-ac84-493a-92cc-31ac6b6ae03b,Namespace:calico-system,Attempt:0,}" Jul 2 08:59:52.879849 kubelet[3538]: E0702 08:59:52.879617 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.879849 kubelet[3538]: W0702 08:59:52.879842 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.880064 kubelet[3538]: E0702 08:59:52.879911 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.880122 kubelet[3538]: I0702 08:59:52.879955 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f75ea9cc-4b02-4191-80d2-ca7bba2a9b74-varrun\") pod \"csi-node-driver-dbzvw\" (UID: \"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74\") " pod="calico-system/csi-node-driver-dbzvw" Jul 2 08:59:52.882305 kubelet[3538]: E0702 08:59:52.881643 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.882305 kubelet[3538]: W0702 08:59:52.881763 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.882493 kubelet[3538]: E0702 08:59:52.881808 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.884691 kubelet[3538]: E0702 08:59:52.884638 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.884691 kubelet[3538]: W0702 08:59:52.884677 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.884691 kubelet[3538]: E0702 08:59:52.884917 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.885621 kubelet[3538]: E0702 08:59:52.885578 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.885996 kubelet[3538]: W0702 08:59:52.885643 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.885996 kubelet[3538]: E0702 08:59:52.885760 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:52.886144 kubelet[3538]: I0702 08:59:52.886025 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f75ea9cc-4b02-4191-80d2-ca7bba2a9b74-kubelet-dir\") pod \"csi-node-driver-dbzvw\" (UID: \"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74\") " pod="calico-system/csi-node-driver-dbzvw" Jul 2 08:59:52.888973 kubelet[3538]: E0702 08:59:52.888881 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.888973 kubelet[3538]: W0702 08:59:52.888958 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.889236 kubelet[3538]: E0702 08:59:52.889187 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.891311 kubelet[3538]: E0702 08:59:52.891207 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.891311 kubelet[3538]: W0702 08:59:52.891296 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.892254 kubelet[3538]: E0702 08:59:52.891504 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.892254 kubelet[3538]: E0702 08:59:52.891933 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.892254 kubelet[3538]: W0702 08:59:52.891956 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.892888 kubelet[3538]: E0702 08:59:52.892829 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.893069 kubelet[3538]: I0702 08:59:52.893022 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f75ea9cc-4b02-4191-80d2-ca7bba2a9b74-socket-dir\") pod \"csi-node-driver-dbzvw\" (UID: \"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74\") " pod="calico-system/csi-node-driver-dbzvw" Jul 2 08:59:52.894587 kubelet[3538]: E0702 08:59:52.893849 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.894587 kubelet[3538]: W0702 08:59:52.893889 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.895184 kubelet[3538]: E0702 08:59:52.894896 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:52.895354 kubelet[3538]: E0702 08:59:52.895253 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.895354 kubelet[3538]: W0702 08:59:52.895297 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.895959 kubelet[3538]: E0702 08:59:52.895897 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.896302 kubelet[3538]: E0702 08:59:52.896188 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.896302 kubelet[3538]: W0702 08:59:52.896219 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.896302 kubelet[3538]: E0702 08:59:52.896261 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.897760 kubelet[3538]: E0702 08:59:52.897661 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.898178 kubelet[3538]: W0702 08:59:52.897701 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.898178 kubelet[3538]: E0702 08:59:52.897814 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.899166 kubelet[3538]: E0702 08:59:52.898868 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.899166 kubelet[3538]: W0702 08:59:52.898903 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.899166 kubelet[3538]: E0702 08:59:52.898949 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.900133 kubelet[3538]: E0702 08:59:52.899887 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.900133 kubelet[3538]: W0702 08:59:52.899919 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.900133 kubelet[3538]: E0702 08:59:52.899963 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:52.904009 kubelet[3538]: E0702 08:59:52.903875 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.904009 kubelet[3538]: W0702 08:59:52.903912 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.904009 kubelet[3538]: E0702 08:59:52.903960 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.904441 kubelet[3538]: E0702 08:59:52.904377 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.904441 kubelet[3538]: W0702 08:59:52.904398 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.904441 kubelet[3538]: E0702 08:59:52.904435 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.907095 kubelet[3538]: E0702 08:59:52.905468 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.907095 kubelet[3538]: W0702 08:59:52.905507 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.907095 kubelet[3538]: E0702 08:59:52.905541 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.907571 kubelet[3538]: E0702 08:59:52.907523 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:52.907571 kubelet[3538]: W0702 08:59:52.907563 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:52.909935 kubelet[3538]: E0702 08:59:52.907596 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:52.944853 containerd[2016]: time="2024-07-02T08:59:52.942780177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:52.945228 containerd[2016]: time="2024-07-02T08:59:52.945163857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:52.945488 containerd[2016]: time="2024-07-02T08:59:52.945437937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:52.945665 containerd[2016]: time="2024-07-02T08:59:52.945608145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:53.009049 systemd[1]: Started cri-containerd-c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4.scope - libcontainer container c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4. Jul 2 08:59:53.011923 kubelet[3538]: E0702 08:59:53.011860 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.011923 kubelet[3538]: W0702 08:59:53.011907 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.012170 kubelet[3538]: E0702 08:59:53.011942 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.012603 kubelet[3538]: E0702 08:59:53.012565 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.012603 kubelet[3538]: W0702 08:59:53.012596 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.012797 kubelet[3538]: E0702 08:59:53.012667 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.013452 kubelet[3538]: E0702 08:59:53.013271 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.013452 kubelet[3538]: W0702 08:59:53.013302 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.013452 kubelet[3538]: E0702 08:59:53.013338 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.016022 kubelet[3538]: E0702 08:59:53.015964 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.016198 kubelet[3538]: W0702 08:59:53.016009 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.016503 kubelet[3538]: E0702 08:59:53.016293 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:53.016606 kubelet[3538]: E0702 08:59:53.016567 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.016606 kubelet[3538]: W0702 08:59:53.016584 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.017765 kubelet[3538]: E0702 08:59:53.016812 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.017765 kubelet[3538]: E0702 08:59:53.017089 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.017765 kubelet[3538]: W0702 08:59:53.017107 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.017765 kubelet[3538]: E0702 08:59:53.017258 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.018052 kubelet[3538]: E0702 08:59:53.017833 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.018052 kubelet[3538]: W0702 08:59:53.017856 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.018509 kubelet[3538]: E0702 08:59:53.017937 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.020873 kubelet[3538]: E0702 08:59:53.020812 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.021277 kubelet[3538]: W0702 08:59:53.021062 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.021277 kubelet[3538]: E0702 08:59:53.021140 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:53.021277 kubelet[3538]: I0702 08:59:53.021193 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkjg9\" (UniqueName: \"kubernetes.io/projected/f75ea9cc-4b02-4191-80d2-ca7bba2a9b74-kube-api-access-pkjg9\") pod \"csi-node-driver-dbzvw\" (UID: \"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74\") " pod="calico-system/csi-node-driver-dbzvw" Jul 2 08:59:53.021975 kubelet[3538]: E0702 08:59:53.021699 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.021975 kubelet[3538]: W0702 08:59:53.021805 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.021975 kubelet[3538]: E0702 08:59:53.021836 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.023230 kubelet[3538]: E0702 08:59:53.022296 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.023230 kubelet[3538]: W0702 08:59:53.022320 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.023230 kubelet[3538]: E0702 08:59:53.022345 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.023230 kubelet[3538]: E0702 08:59:53.022791 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.023230 kubelet[3538]: W0702 08:59:53.022818 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.023230 kubelet[3538]: E0702 08:59:53.022866 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.024118 kubelet[3538]: E0702 08:59:53.023903 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.024118 kubelet[3538]: W0702 08:59:53.023948 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.024118 kubelet[3538]: E0702 08:59:53.023995 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:53.024850 kubelet[3538]: E0702 08:59:53.024820 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.026746 kubelet[3538]: W0702 08:59:53.024976 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.026746 kubelet[3538]: E0702 08:59:53.025057 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.027765 kubelet[3538]: E0702 08:59:53.027503 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.027765 kubelet[3538]: W0702 08:59:53.027536 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.027765 kubelet[3538]: E0702 08:59:53.027616 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.028524 kubelet[3538]: E0702 08:59:53.028481 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.028840 kubelet[3538]: W0702 08:59:53.028658 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.028840 kubelet[3538]: E0702 08:59:53.028778 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.028980 kubelet[3538]: I0702 08:59:53.028858 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f75ea9cc-4b02-4191-80d2-ca7bba2a9b74-registration-dir\") pod \"csi-node-driver-dbzvw\" (UID: \"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74\") " pod="calico-system/csi-node-driver-dbzvw" Jul 2 08:59:53.029609 kubelet[3538]: E0702 08:59:53.029444 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.029609 kubelet[3538]: W0702 08:59:53.029473 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.029609 kubelet[3538]: E0702 08:59:53.029543 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:53.030323 kubelet[3538]: E0702 08:59:53.030099 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.030323 kubelet[3538]: W0702 08:59:53.030124 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.030323 kubelet[3538]: E0702 08:59:53.030190 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.030797 kubelet[3538]: E0702 08:59:53.030771 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.031864 kubelet[3538]: W0702 08:59:53.031810 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.032264 kubelet[3538]: E0702 08:59:53.032053 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.032562 kubelet[3538]: E0702 08:59:53.032534 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.032888 kubelet[3538]: W0702 08:59:53.032653 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.032888 kubelet[3538]: E0702 08:59:53.032768 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.033130 kubelet[3538]: E0702 08:59:53.033108 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.033761 kubelet[3538]: W0702 08:59:53.033258 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.033761 kubelet[3538]: E0702 08:59:53.033296 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.036036 kubelet[3538]: E0702 08:59:53.035913 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.036036 kubelet[3538]: W0702 08:59:53.035951 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.036036 kubelet[3538]: E0702 08:59:53.035984 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:53.130517 kubelet[3538]: E0702 08:59:53.130462 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.130517 kubelet[3538]: W0702 08:59:53.130499 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.130781 kubelet[3538]: E0702 08:59:53.130554 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.131951 kubelet[3538]: E0702 08:59:53.131033 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.131951 kubelet[3538]: W0702 08:59:53.131084 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.131951 kubelet[3538]: E0702 08:59:53.131125 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.133767 kubelet[3538]: E0702 08:59:53.132793 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.133767 kubelet[3538]: W0702 08:59:53.132831 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.133767 kubelet[3538]: E0702 08:59:53.132889 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.135828 kubelet[3538]: E0702 08:59:53.135026 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.135828 kubelet[3538]: W0702 08:59:53.135059 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.135828 kubelet[3538]: E0702 08:59:53.135126 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.136497 kubelet[3538]: E0702 08:59:53.136466 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.136789 kubelet[3538]: W0702 08:59:53.136637 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.136789 kubelet[3538]: E0702 08:59:53.136695 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:53.137443 kubelet[3538]: E0702 08:59:53.137415 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.137774 kubelet[3538]: W0702 08:59:53.137579 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.137774 kubelet[3538]: E0702 08:59:53.137668 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.139372 kubelet[3538]: E0702 08:59:53.139340 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.139817 kubelet[3538]: W0702 08:59:53.139510 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.139817 kubelet[3538]: E0702 08:59:53.139566 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.140516 kubelet[3538]: E0702 08:59:53.140364 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.140815 kubelet[3538]: W0702 08:59:53.140625 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.140815 kubelet[3538]: E0702 08:59:53.140696 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.144087 kubelet[3538]: E0702 08:59:53.143814 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.144087 kubelet[3538]: W0702 08:59:53.143884 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.144087 kubelet[3538]: E0702 08:59:53.143919 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.145890 kubelet[3538]: E0702 08:59:53.145858 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.146307 kubelet[3538]: W0702 08:59:53.146089 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.146307 kubelet[3538]: E0702 08:59:53.146133 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:53.151491 containerd[2016]: time="2024-07-02T08:59:53.151112862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cd4848bc4-t2qnl,Uid:2ced013d-3208-4f65-9483-96b6f97cc71c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\"" Jul 2 08:59:53.157044 containerd[2016]: time="2024-07-02T08:59:53.156993930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 08:59:53.177453 kubelet[3538]: E0702 08:59:53.177324 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:53.177453 kubelet[3538]: W0702 08:59:53.177362 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:53.177453 kubelet[3538]: E0702 08:59:53.177395 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:53.209520 containerd[2016]: time="2024-07-02T08:59:53.209157606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z8tlq,Uid:f49d4e14-ac84-493a-92cc-31ac6b6ae03b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\"" Jul 2 08:59:54.046847 kubelet[3538]: E0702 08:59:54.045946 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 08:59:55.602323 containerd[2016]: time="2024-07-02T08:59:55.602255158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:55.606240 containerd[2016]: time="2024-07-02T08:59:55.606153442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 08:59:55.607759 containerd[2016]: time="2024-07-02T08:59:55.607642642Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:55.613661 containerd[2016]: time="2024-07-02T08:59:55.613579510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:55.618524 containerd[2016]: time="2024-07-02T08:59:55.618437350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.461105836s" Jul 2 08:59:55.618524 containerd[2016]: time="2024-07-02T08:59:55.618516742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 08:59:55.622634 
containerd[2016]: time="2024-07-02T08:59:55.622571146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 08:59:55.655753 containerd[2016]: time="2024-07-02T08:59:55.654555106Z" level=info msg="CreateContainer within sandbox \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 08:59:55.689229 containerd[2016]: time="2024-07-02T08:59:55.688052650Z" level=info msg="CreateContainer within sandbox \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\"" Jul 2 08:59:55.689728 containerd[2016]: time="2024-07-02T08:59:55.689657314Z" level=info msg="StartContainer for \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\"" Jul 2 08:59:55.764098 systemd[1]: Started cri-containerd-fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b.scope - libcontainer container fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b. Jul 2 08:59:55.884805 containerd[2016]: time="2024-07-02T08:59:55.884112599Z" level=info msg="StartContainer for \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\" returns successfully" Jul 2 08:59:56.050054 kubelet[3538]: E0702 08:59:56.049289 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 08:59:56.232449 containerd[2016]: time="2024-07-02T08:59:56.232381161Z" level=info msg="StopContainer for \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\" with timeout 300 (s)" Jul 2 08:59:56.233816 containerd[2016]: time="2024-07-02T08:59:56.233640897Z" level=info msg="Stop container \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\" with signal terminated" Jul 2 08:59:56.310936 systemd[1]: cri-containerd-fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b.scope: Deactivated successfully. Jul 2 08:59:56.638337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b-rootfs.mount: Deactivated successfully. 
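The kubelet errors repeated throughout this log (the three-line bursts from driver-call.go:262, driver-call.go:149 and plugins.go:730) all come from the kubelet's FlexVolume prober: each probe of /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ invokes the driver with the argument "init", but the uds binary is not on disk yet, so the call produces no output and the JSON unmarshal fails with "unexpected end of JSON input". On Flatcar the plugin directory sits under /opt/libexec rather than the usual /usr/libexec default, which is why that path appears in every warning. On a Calico node the flexvol-driver init container created further down (from the ghcr.io/flatcar/calico/pod2daemon-flexvol image pulled below) is what normally installs the uds binary into that directory, after which the probe succeeds and these messages stop. As a reference for what the driver-call code is trying to unmarshal, here is a minimal Go sketch of the FlexVolume call convention; it is not Calico's actual uds driver, and the file name, struct name and capability flags are illustrative assumptions only.

// flexvol-init-stub.go: minimal sketch of the FlexVolume contract the kubelet
// probes for. A real driver (such as Calico's uds binary) does much more; this
// stub only shows the JSON shape expected on stdout for the "init" call.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
	if len(os.Args) < 2 {
		// Exiting without printing any JSON is exactly what produces the
		// "unexpected end of JSON input" errors seen in the log above.
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // assumption: no attach/detach support
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}

Once a binary honouring this contract exists at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the prober's init call unmarshals cleanly and the recurring error triplet drops out of subsequent reconciler passes; until then the errors are noisy but harmless, since no pod in this log actually mounts a FlexVolume.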
Jul 2 08:59:57.177341 containerd[2016]: time="2024-07-02T08:59:57.175384570Z" level=info msg="shim disconnected" id=fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b namespace=k8s.io Jul 2 08:59:57.178995 containerd[2016]: time="2024-07-02T08:59:57.178475026Z" level=warning msg="cleaning up after shim disconnected" id=fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b namespace=k8s.io Jul 2 08:59:57.179778 containerd[2016]: time="2024-07-02T08:59:57.179619106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:59:57.243831 containerd[2016]: time="2024-07-02T08:59:57.243758638Z" level=info msg="StopContainer for \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\" returns successfully" Jul 2 08:59:57.246771 containerd[2016]: time="2024-07-02T08:59:57.246284542Z" level=info msg="StopPodSandbox for \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\"" Jul 2 08:59:57.249852 containerd[2016]: time="2024-07-02T08:59:57.246659626Z" level=info msg="Container to stop \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:59:57.253419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944-shm.mount: Deactivated successfully. Jul 2 08:59:57.285438 systemd[1]: cri-containerd-6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944.scope: Deactivated successfully. Jul 2 08:59:57.379126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944-rootfs.mount: Deactivated successfully. Jul 2 08:59:57.433351 containerd[2016]: time="2024-07-02T08:59:57.433175843Z" level=info msg="shim disconnected" id=6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944 namespace=k8s.io Jul 2 08:59:57.435194 containerd[2016]: time="2024-07-02T08:59:57.434783027Z" level=warning msg="cleaning up after shim disconnected" id=6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944 namespace=k8s.io Jul 2 08:59:57.435194 containerd[2016]: time="2024-07-02T08:59:57.434834027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:59:57.453045 containerd[2016]: time="2024-07-02T08:59:57.452917091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:57.455289 containerd[2016]: time="2024-07-02T08:59:57.455183147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 08:59:57.459452 containerd[2016]: time="2024-07-02T08:59:57.459024887Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:57.469343 containerd[2016]: time="2024-07-02T08:59:57.469278335Z" level=info msg="TearDown network for sandbox \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\" successfully" Jul 2 08:59:57.469343 containerd[2016]: time="2024-07-02T08:59:57.469329035Z" level=info msg="StopPodSandbox for \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\" returns successfully" Jul 2 08:59:57.470790 containerd[2016]: time="2024-07-02T08:59:57.470380319Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:59:57.480365 containerd[2016]: time="2024-07-02T08:59:57.479861279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.857225777s" Jul 2 08:59:57.480365 containerd[2016]: time="2024-07-02T08:59:57.479929763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 08:59:57.490649 containerd[2016]: time="2024-07-02T08:59:57.490584359Z" level=info msg="CreateContainer within sandbox \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 08:59:57.541600 containerd[2016]: time="2024-07-02T08:59:57.541366584Z" level=info msg="CreateContainer within sandbox \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb\"" Jul 2 08:59:57.542984 containerd[2016]: time="2024-07-02T08:59:57.542618964Z" level=info msg="StartContainer for \"8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb\"" Jul 2 08:59:57.555314 kubelet[3538]: I0702 08:59:57.555232 3538 topology_manager.go:215] "Topology Admit Handler" podUID="ca3edbde-49c6-4e0e-80c6-aa65e98ca1fa" podNamespace="calico-system" podName="calico-typha-869f4d8664-xq8s8" Jul 2 08:59:57.559134 kubelet[3538]: E0702 08:59:57.555344 3538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2ced013d-3208-4f65-9483-96b6f97cc71c" containerName="calico-typha" Jul 2 08:59:57.559134 kubelet[3538]: I0702 08:59:57.555392 3538 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ced013d-3208-4f65-9483-96b6f97cc71c" containerName="calico-typha" Jul 2 08:59:57.570164 kubelet[3538]: E0702 08:59:57.569388 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.570164 kubelet[3538]: W0702 08:59:57.569429 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.570164 kubelet[3538]: E0702 08:59:57.569464 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.570164 kubelet[3538]: I0702 08:59:57.569512 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ced013d-3208-4f65-9483-96b6f97cc71c-tigera-ca-bundle\") pod \"2ced013d-3208-4f65-9483-96b6f97cc71c\" (UID: \"2ced013d-3208-4f65-9483-96b6f97cc71c\") " Jul 2 08:59:57.574096 kubelet[3538]: E0702 08:59:57.573830 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.574649 kubelet[3538]: W0702 08:59:57.573874 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.574649 kubelet[3538]: E0702 08:59:57.574539 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.575504 kubelet[3538]: I0702 08:59:57.575291 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkkjj\" (UniqueName: \"kubernetes.io/projected/2ced013d-3208-4f65-9483-96b6f97cc71c-kube-api-access-mkkjj\") pod \"2ced013d-3208-4f65-9483-96b6f97cc71c\" (UID: \"2ced013d-3208-4f65-9483-96b6f97cc71c\") " Jul 2 08:59:57.578195 systemd[1]: Created slice kubepods-besteffort-podca3edbde_49c6_4e0e_80c6_aa65e98ca1fa.slice - libcontainer container kubepods-besteffort-podca3edbde_49c6_4e0e_80c6_aa65e98ca1fa.slice. Jul 2 08:59:57.584361 kubelet[3538]: E0702 08:59:57.584188 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.584928 kubelet[3538]: W0702 08:59:57.584863 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.584928 kubelet[3538]: E0702 08:59:57.584920 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.585213 kubelet[3538]: I0702 08:59:57.584970 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2ced013d-3208-4f65-9483-96b6f97cc71c-typha-certs\") pod \"2ced013d-3208-4f65-9483-96b6f97cc71c\" (UID: \"2ced013d-3208-4f65-9483-96b6f97cc71c\") " Jul 2 08:59:57.587195 kubelet[3538]: E0702 08:59:57.587079 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.587195 kubelet[3538]: W0702 08:59:57.587123 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.587993 kubelet[3538]: E0702 08:59:57.587376 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.596157 kubelet[3538]: E0702 08:59:57.596035 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.596157 kubelet[3538]: W0702 08:59:57.596073 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.596157 kubelet[3538]: E0702 08:59:57.596107 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.621406 kubelet[3538]: E0702 08:59:57.621268 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.621406 kubelet[3538]: W0702 08:59:57.621326 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.621406 kubelet[3538]: E0702 08:59:57.621362 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.623072 kubelet[3538]: I0702 08:59:57.622963 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ced013d-3208-4f65-9483-96b6f97cc71c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "2ced013d-3208-4f65-9483-96b6f97cc71c" (UID: "2ced013d-3208-4f65-9483-96b6f97cc71c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:59:57.636411 systemd[1]: var-lib-kubelet-pods-2ced013d\x2d3208\x2d4f65\x2d9483\x2d96b6f97cc71c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jul 2 08:59:57.637474 systemd[1]: var-lib-kubelet-pods-2ced013d\x2d3208\x2d4f65\x2d9483\x2d96b6f97cc71c-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jul 2 08:59:57.643550 kubelet[3538]: I0702 08:59:57.643000 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ced013d-3208-4f65-9483-96b6f97cc71c-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "2ced013d-3208-4f65-9483-96b6f97cc71c" (UID: "2ced013d-3208-4f65-9483-96b6f97cc71c"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:59:57.645675 systemd[1]: var-lib-kubelet-pods-2ced013d\x2d3208\x2d4f65\x2d9483\x2d96b6f97cc71c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmkkjj.mount: Deactivated successfully. Jul 2 08:59:57.652770 kubelet[3538]: E0702 08:59:57.651392 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.652770 kubelet[3538]: W0702 08:59:57.651440 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.652770 kubelet[3538]: E0702 08:59:57.651582 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.652770 kubelet[3538]: E0702 08:59:57.652136 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.652770 kubelet[3538]: W0702 08:59:57.652160 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.652770 kubelet[3538]: E0702 08:59:57.652187 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.653512 kubelet[3538]: E0702 08:59:57.653051 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.653512 kubelet[3538]: W0702 08:59:57.653077 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.653512 kubelet[3538]: E0702 08:59:57.653141 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.653512 kubelet[3538]: I0702 08:59:57.651381 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ced013d-3208-4f65-9483-96b6f97cc71c-kube-api-access-mkkjj" (OuterVolumeSpecName: "kube-api-access-mkkjj") pod "2ced013d-3208-4f65-9483-96b6f97cc71c" (UID: "2ced013d-3208-4f65-9483-96b6f97cc71c"). InnerVolumeSpecName "kube-api-access-mkkjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:59:57.654454 kubelet[3538]: E0702 08:59:57.653781 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.654454 kubelet[3538]: W0702 08:59:57.653801 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.654454 kubelet[3538]: E0702 08:59:57.653856 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.654454 kubelet[3538]: E0702 08:59:57.654495 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.654454 kubelet[3538]: W0702 08:59:57.654654 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.654454 kubelet[3538]: E0702 08:59:57.654688 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.656160 kubelet[3538]: E0702 08:59:57.655167 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.656160 kubelet[3538]: W0702 08:59:57.655532 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.656160 kubelet[3538]: E0702 08:59:57.655871 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.657563 kubelet[3538]: E0702 08:59:57.656963 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.657563 kubelet[3538]: W0702 08:59:57.656989 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.657563 kubelet[3538]: E0702 08:59:57.657047 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.658368 kubelet[3538]: E0702 08:59:57.657706 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.658368 kubelet[3538]: W0702 08:59:57.657846 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.658368 kubelet[3538]: E0702 08:59:57.657877 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.658368 kubelet[3538]: E0702 08:59:57.658286 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.658368 kubelet[3538]: W0702 08:59:57.658306 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.658368 kubelet[3538]: E0702 08:59:57.658328 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.658675 kubelet[3538]: E0702 08:59:57.658619 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.658675 kubelet[3538]: W0702 08:59:57.658635 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.658675 kubelet[3538]: E0702 08:59:57.658655 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.661187 kubelet[3538]: E0702 08:59:57.660054 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.661187 kubelet[3538]: W0702 08:59:57.660092 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.661187 kubelet[3538]: E0702 08:59:57.660125 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.661187 kubelet[3538]: E0702 08:59:57.660831 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.661187 kubelet[3538]: W0702 08:59:57.660855 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.661187 kubelet[3538]: E0702 08:59:57.660883 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.686982 kubelet[3538]: E0702 08:59:57.686363 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.686982 kubelet[3538]: W0702 08:59:57.686401 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.686982 kubelet[3538]: E0702 08:59:57.686433 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.686982 kubelet[3538]: I0702 08:59:57.686480 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npg8s\" (UniqueName: \"kubernetes.io/projected/ca3edbde-49c6-4e0e-80c6-aa65e98ca1fa-kube-api-access-npg8s\") pod \"calico-typha-869f4d8664-xq8s8\" (UID: \"ca3edbde-49c6-4e0e-80c6-aa65e98ca1fa\") " pod="calico-system/calico-typha-869f4d8664-xq8s8" Jul 2 08:59:57.690370 kubelet[3538]: E0702 08:59:57.689786 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.690370 kubelet[3538]: W0702 08:59:57.689865 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.690370 kubelet[3538]: E0702 08:59:57.689912 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.690370 kubelet[3538]: I0702 08:59:57.689961 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca3edbde-49c6-4e0e-80c6-aa65e98ca1fa-tigera-ca-bundle\") pod \"calico-typha-869f4d8664-xq8s8\" (UID: \"ca3edbde-49c6-4e0e-80c6-aa65e98ca1fa\") " pod="calico-system/calico-typha-869f4d8664-xq8s8" Jul 2 08:59:57.690700 kubelet[3538]: E0702 08:59:57.690438 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.690700 kubelet[3538]: W0702 08:59:57.690465 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.690700 kubelet[3538]: E0702 08:59:57.690505 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.690700 kubelet[3538]: E0702 08:59:57.690850 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.690700 kubelet[3538]: W0702 08:59:57.690868 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.690700 kubelet[3538]: E0702 08:59:57.690898 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.693844 kubelet[3538]: E0702 08:59:57.693792 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.693844 kubelet[3538]: W0702 08:59:57.693830 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.694818 kubelet[3538]: E0702 08:59:57.693877 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.694818 kubelet[3538]: E0702 08:59:57.694521 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.694818 kubelet[3538]: I0702 08:59:57.694542 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ca3edbde-49c6-4e0e-80c6-aa65e98ca1fa-typha-certs\") pod \"calico-typha-869f4d8664-xq8s8\" (UID: \"ca3edbde-49c6-4e0e-80c6-aa65e98ca1fa\") " pod="calico-system/calico-typha-869f4d8664-xq8s8" Jul 2 08:59:57.694818 kubelet[3538]: I0702 08:59:57.694596 3538 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mkkjj\" (UniqueName: \"kubernetes.io/projected/2ced013d-3208-4f65-9483-96b6f97cc71c-kube-api-access-mkkjj\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:57.694818 kubelet[3538]: I0702 08:59:57.694620 3538 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ced013d-3208-4f65-9483-96b6f97cc71c-tigera-ca-bundle\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:57.694818 kubelet[3538]: I0702 08:59:57.694644 3538 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2ced013d-3208-4f65-9483-96b6f97cc71c-typha-certs\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:57.694818 kubelet[3538]: W0702 08:59:57.694545 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.694818 kubelet[3538]: E0702 08:59:57.694679 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.696771 kubelet[3538]: E0702 08:59:57.696004 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.696771 kubelet[3538]: W0702 08:59:57.696035 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.696771 kubelet[3538]: E0702 08:59:57.696316 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.697779 kubelet[3538]: E0702 08:59:57.697299 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.697779 kubelet[3538]: W0702 08:59:57.697331 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.697779 kubelet[3538]: E0702 08:59:57.697363 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.698317 kubelet[3538]: E0702 08:59:57.698217 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.698317 kubelet[3538]: W0702 08:59:57.698244 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.698317 kubelet[3538]: E0702 08:59:57.698273 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.729025 systemd[1]: Started cri-containerd-8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb.scope - libcontainer container 8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb. Jul 2 08:59:57.796589 kubelet[3538]: E0702 08:59:57.796511 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.796589 kubelet[3538]: W0702 08:59:57.796554 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.796589 kubelet[3538]: E0702 08:59:57.796589 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.798654 kubelet[3538]: E0702 08:59:57.798441 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.798654 kubelet[3538]: W0702 08:59:57.798502 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.798654 kubelet[3538]: E0702 08:59:57.798575 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.800774 kubelet[3538]: E0702 08:59:57.800409 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.800774 kubelet[3538]: W0702 08:59:57.800448 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.801734 kubelet[3538]: E0702 08:59:57.801289 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.803432 kubelet[3538]: E0702 08:59:57.802610 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.803432 kubelet[3538]: W0702 08:59:57.802646 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.803432 kubelet[3538]: E0702 08:59:57.802694 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.803432 kubelet[3538]: E0702 08:59:57.803189 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.803432 kubelet[3538]: W0702 08:59:57.803254 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.805198 kubelet[3538]: E0702 08:59:57.803851 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.805198 kubelet[3538]: W0702 08:59:57.803882 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.805198 kubelet[3538]: E0702 08:59:57.804318 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.805198 kubelet[3538]: W0702 08:59:57.804362 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.805198 kubelet[3538]: E0702 08:59:57.804389 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.805198 kubelet[3538]: E0702 08:59:57.804830 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.805198 kubelet[3538]: W0702 08:59:57.804849 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.805198 kubelet[3538]: E0702 08:59:57.804902 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.805198 kubelet[3538]: E0702 08:59:57.804917 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.805198 kubelet[3538]: E0702 08:59:57.805083 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.805873 kubelet[3538]: E0702 08:59:57.805519 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.805873 kubelet[3538]: W0702 08:59:57.805553 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.805873 kubelet[3538]: E0702 08:59:57.805579 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.806069 kubelet[3538]: E0702 08:59:57.806035 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.806069 kubelet[3538]: W0702 08:59:57.806052 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.807554 kubelet[3538]: E0702 08:59:57.806430 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.807554 kubelet[3538]: W0702 08:59:57.806471 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.808473 kubelet[3538]: E0702 08:59:57.808158 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.808951 kubelet[3538]: E0702 08:59:57.808488 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.813963 kubelet[3538]: E0702 08:59:57.811692 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.816247 kubelet[3538]: W0702 08:59:57.813674 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.816247 kubelet[3538]: E0702 08:59:57.815889 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.817411 kubelet[3538]: E0702 08:59:57.817281 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.817411 kubelet[3538]: W0702 08:59:57.817331 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.817411 kubelet[3538]: E0702 08:59:57.817362 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.819407 kubelet[3538]: E0702 08:59:57.819349 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.819407 kubelet[3538]: W0702 08:59:57.819390 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.819610 kubelet[3538]: E0702 08:59:57.819425 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.822050 kubelet[3538]: E0702 08:59:57.820805 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.822050 kubelet[3538]: W0702 08:59:57.820863 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.822050 kubelet[3538]: E0702 08:59:57.820976 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.826749 kubelet[3538]: E0702 08:59:57.824808 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.826749 kubelet[3538]: W0702 08:59:57.824880 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.826749 kubelet[3538]: E0702 08:59:57.824940 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.842346 kubelet[3538]: E0702 08:59:57.840112 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.842346 kubelet[3538]: W0702 08:59:57.840150 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.842346 kubelet[3538]: E0702 08:59:57.840184 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:59:57.851564 containerd[2016]: time="2024-07-02T08:59:57.851469961Z" level=info msg="StartContainer for \"8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb\" returns successfully" Jul 2 08:59:57.861213 kubelet[3538]: E0702 08:59:57.861034 3538 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:59:57.861213 kubelet[3538]: W0702 08:59:57.861070 3538 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:59:57.861213 kubelet[3538]: E0702 08:59:57.861134 3538 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:59:57.891744 systemd[1]: cri-containerd-8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb.scope: Deactivated successfully. Jul 2 08:59:57.921403 containerd[2016]: time="2024-07-02T08:59:57.920852473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-869f4d8664-xq8s8,Uid:ca3edbde-49c6-4e0e-80c6-aa65e98ca1fa,Namespace:calico-system,Attempt:0,}" Jul 2 08:59:58.003316 containerd[2016]: time="2024-07-02T08:59:58.002885698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:58.003316 containerd[2016]: time="2024-07-02T08:59:58.002989762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:58.003316 containerd[2016]: time="2024-07-02T08:59:58.003033586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:58.003316 containerd[2016]: time="2024-07-02T08:59:58.003067594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:58.042487 containerd[2016]: time="2024-07-02T08:59:58.038679394Z" level=info msg="shim disconnected" id=8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb namespace=k8s.io Jul 2 08:59:58.042487 containerd[2016]: time="2024-07-02T08:59:58.042082402Z" level=warning msg="cleaning up after shim disconnected" id=8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb namespace=k8s.io Jul 2 08:59:58.042487 containerd[2016]: time="2024-07-02T08:59:58.042288610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:59:58.050605 kubelet[3538]: E0702 08:59:58.049232 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 08:59:58.071087 systemd[1]: Started cri-containerd-1bbff0e054718e17f79a2c926eb6f78945780b255e8f38a1917b0e8073790c04.scope - libcontainer container 1bbff0e054718e17f79a2c926eb6f78945780b255e8f38a1917b0e8073790c04. Jul 2 08:59:58.093199 systemd[1]: Removed slice kubepods-besteffort-pod2ced013d_3208_4f65_9483_96b6f97cc71c.slice - libcontainer container kubepods-besteffort-pod2ced013d_3208_4f65_9483_96b6f97cc71c.slice. 
Jul 2 08:59:58.202459 containerd[2016]: time="2024-07-02T08:59:58.202203815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-869f4d8664-xq8s8,Uid:ca3edbde-49c6-4e0e-80c6-aa65e98ca1fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bbff0e054718e17f79a2c926eb6f78945780b255e8f38a1917b0e8073790c04\"" Jul 2 08:59:58.223909 containerd[2016]: time="2024-07-02T08:59:58.223807775Z" level=info msg="CreateContainer within sandbox \"1bbff0e054718e17f79a2c926eb6f78945780b255e8f38a1917b0e8073790c04\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 08:59:58.244779 kubelet[3538]: I0702 08:59:58.244704 3538 scope.go:117] "RemoveContainer" containerID="fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b" Jul 2 08:59:58.257693 containerd[2016]: time="2024-07-02T08:59:58.256224011Z" level=info msg="StopPodSandbox for \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\"" Jul 2 08:59:58.257693 containerd[2016]: time="2024-07-02T08:59:58.256302635Z" level=info msg="Container to stop \"8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:59:58.257693 containerd[2016]: time="2024-07-02T08:59:58.257439779Z" level=info msg="RemoveContainer for \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\"" Jul 2 08:59:58.272112 containerd[2016]: time="2024-07-02T08:59:58.270904955Z" level=info msg="RemoveContainer for \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\" returns successfully" Jul 2 08:59:58.272287 kubelet[3538]: I0702 08:59:58.271409 3538 scope.go:117] "RemoveContainer" containerID="fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b" Jul 2 08:59:58.272815 containerd[2016]: time="2024-07-02T08:59:58.272420543Z" level=error msg="ContainerStatus for \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\": not found" Jul 2 08:59:58.273683 kubelet[3538]: E0702 08:59:58.273199 3538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\": not found" containerID="fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b" Jul 2 08:59:58.273683 kubelet[3538]: I0702 08:59:58.273245 3538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b"} err="failed to get container status \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbd9e638341ab5d3360b09d08407422e580a7492a6add7f3ce4e3ededf16162b\": not found" Jul 2 08:59:58.275696 containerd[2016]: time="2024-07-02T08:59:58.274472471Z" level=info msg="CreateContainer within sandbox \"1bbff0e054718e17f79a2c926eb6f78945780b255e8f38a1917b0e8073790c04\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7b2dd7ef886d61405cd122a44a4d35c1591e930c40c9cbd0b17d761d9f0213f3\"" Jul 2 08:59:58.278459 containerd[2016]: time="2024-07-02T08:59:58.276801923Z" level=info msg="StartContainer for \"7b2dd7ef886d61405cd122a44a4d35c1591e930c40c9cbd0b17d761d9f0213f3\"" Jul 2 08:59:58.301620 
systemd[1]: cri-containerd-c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4.scope: Deactivated successfully. Jul 2 08:59:58.390063 systemd[1]: Started cri-containerd-7b2dd7ef886d61405cd122a44a4d35c1591e930c40c9cbd0b17d761d9f0213f3.scope - libcontainer container 7b2dd7ef886d61405cd122a44a4d35c1591e930c40c9cbd0b17d761d9f0213f3. Jul 2 08:59:58.428189 containerd[2016]: time="2024-07-02T08:59:58.428015496Z" level=info msg="shim disconnected" id=c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4 namespace=k8s.io Jul 2 08:59:58.428189 containerd[2016]: time="2024-07-02T08:59:58.428122680Z" level=warning msg="cleaning up after shim disconnected" id=c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4 namespace=k8s.io Jul 2 08:59:58.428189 containerd[2016]: time="2024-07-02T08:59:58.428145156Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:59:58.468022 containerd[2016]: time="2024-07-02T08:59:58.467939508Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:59:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 08:59:58.472388 containerd[2016]: time="2024-07-02T08:59:58.472193436Z" level=info msg="TearDown network for sandbox \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\" successfully" Jul 2 08:59:58.472388 containerd[2016]: time="2024-07-02T08:59:58.472256868Z" level=info msg="StopPodSandbox for \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\" returns successfully" Jul 2 08:59:58.624863 kubelet[3538]: I0702 08:59:58.623691 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-lib-modules\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.624863 kubelet[3538]: I0702 08:59:58.623779 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-xtables-lock\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.624863 kubelet[3538]: I0702 08:59:58.623817 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-bin-dir\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.624863 kubelet[3538]: I0702 08:59:58.623853 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-log-dir\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.624863 kubelet[3538]: I0702 08:59:58.623895 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86szs\" (UniqueName: \"kubernetes.io/projected/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-kube-api-access-86szs\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.624863 kubelet[3538]: I0702 08:59:58.623931 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-policysync\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.625920 kubelet[3538]: I0702 08:59:58.623963 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-net-dir\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.625920 kubelet[3538]: I0702 08:59:58.624002 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-tigera-ca-bundle\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.625920 kubelet[3538]: I0702 08:59:58.624046 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-node-certs\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.625920 kubelet[3538]: I0702 08:59:58.624081 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-var-run-calico\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.625920 kubelet[3538]: I0702 08:59:58.624116 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-var-lib-calico\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.625920 kubelet[3538]: I0702 08:59:58.624152 3538 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-flexvol-driver-host\") pod \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\" (UID: \"f49d4e14-ac84-493a-92cc-31ac6b6ae03b\") " Jul 2 08:59:58.626369 kubelet[3538]: I0702 08:59:58.624284 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:59:58.626369 kubelet[3538]: I0702 08:59:58.624348 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:59:58.626369 kubelet[3538]: I0702 08:59:58.624385 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:59:58.626369 kubelet[3538]: I0702 08:59:58.624419 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:59:58.626369 kubelet[3538]: I0702 08:59:58.624455 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:59:58.630604 kubelet[3538]: I0702 08:59:58.629512 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:59:58.630604 kubelet[3538]: I0702 08:59:58.629590 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-policysync" (OuterVolumeSpecName: "policysync") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:59:58.630604 kubelet[3538]: I0702 08:59:58.629628 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:59:58.630604 kubelet[3538]: I0702 08:59:58.629675 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:59:58.631954 kubelet[3538]: I0702 08:59:58.631026 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:59:58.633935 containerd[2016]: time="2024-07-02T08:59:58.632526037Z" level=info msg="StartContainer for \"7b2dd7ef886d61405cd122a44a4d35c1591e930c40c9cbd0b17d761d9f0213f3\" returns successfully" Jul 2 08:59:58.667501 kubelet[3538]: I0702 08:59:58.666222 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-kube-api-access-86szs" (OuterVolumeSpecName: "kube-api-access-86szs") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "kube-api-access-86szs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:59:58.670891 kubelet[3538]: I0702 08:59:58.670815 3538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-node-certs" (OuterVolumeSpecName: "node-certs") pod "f49d4e14-ac84-493a-92cc-31ac6b6ae03b" (UID: "f49d4e14-ac84-493a-92cc-31ac6b6ae03b"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:59:58.677531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb-rootfs.mount: Deactivated successfully. Jul 2 08:59:58.677747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4-rootfs.mount: Deactivated successfully. Jul 2 08:59:58.677890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4-shm.mount: Deactivated successfully. Jul 2 08:59:58.691050 systemd[1]: var-lib-kubelet-pods-f49d4e14\x2dac84\x2d493a\x2d92cc\x2d31ac6b6ae03b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d86szs.mount: Deactivated successfully. Jul 2 08:59:58.691263 systemd[1]: var-lib-kubelet-pods-f49d4e14\x2dac84\x2d493a\x2d92cc\x2d31ac6b6ae03b-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jul 2 08:59:58.727464 kubelet[3538]: I0702 08:59:58.725631 3538 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-policysync\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.727464 kubelet[3538]: I0702 08:59:58.725765 3538 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-net-dir\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.727464 kubelet[3538]: I0702 08:59:58.725790 3538 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-tigera-ca-bundle\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.727464 kubelet[3538]: I0702 08:59:58.727034 3538 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-node-certs\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.727464 kubelet[3538]: I0702 08:59:58.727067 3538 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-var-run-calico\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.727464 kubelet[3538]: I0702 08:59:58.727126 3538 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-var-lib-calico\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.727464 kubelet[3538]: I0702 08:59:58.727154 3538 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-flexvol-driver-host\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.727464 kubelet[3538]: I0702 08:59:58.727399 3538 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-lib-modules\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.728124 kubelet[3538]: I0702 08:59:58.727422 3538 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-xtables-lock\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.728388 kubelet[3538]: I0702 08:59:58.728262 3538 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-bin-dir\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.728388 kubelet[3538]: I0702 08:59:58.728300 3538 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-cni-log-dir\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:58.728388 kubelet[3538]: I0702 08:59:58.728350 3538 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-86szs\" (UniqueName: \"kubernetes.io/projected/f49d4e14-ac84-493a-92cc-31ac6b6ae03b-kube-api-access-86szs\") on node \"ip-172-31-26-125\" DevicePath \"\"" Jul 2 08:59:59.272881 kubelet[3538]: I0702 08:59:59.271047 3538 scope.go:117] "RemoveContainer" containerID="8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb" Jul 2 08:59:59.276820 containerd[2016]: 
time="2024-07-02T08:59:59.276417504Z" level=info msg="RemoveContainer for \"8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb\"" Jul 2 08:59:59.288765 containerd[2016]: time="2024-07-02T08:59:59.287168520Z" level=info msg="RemoveContainer for \"8ee39e16c508beeaa93b1c95469c82184258a3d7a8d5564eaff5f8bd9ffd2adb\" returns successfully" Jul 2 08:59:59.293275 systemd[1]: Removed slice kubepods-besteffort-podf49d4e14_ac84_493a_92cc_31ac6b6ae03b.slice - libcontainer container kubepods-besteffort-podf49d4e14_ac84_493a_92cc_31ac6b6ae03b.slice. Jul 2 08:59:59.309478 kubelet[3538]: I0702 08:59:59.309374 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-869f4d8664-xq8s8" podStartSLOduration=5.309351144 podStartE2EDuration="5.309351144s" podCreationTimestamp="2024-07-02 08:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:59:59.3080691 +0000 UTC m=+29.498660151" watchObservedRunningTime="2024-07-02 08:59:59.309351144 +0000 UTC m=+29.499942183" Jul 2 08:59:59.369834 kubelet[3538]: I0702 08:59:59.368143 3538 topology_manager.go:215] "Topology Admit Handler" podUID="26afb7d1-d231-4e10-a7e7-e749c9287cc1" podNamespace="calico-system" podName="calico-node-6kr5d" Jul 2 08:59:59.369834 kubelet[3538]: E0702 08:59:59.368233 3538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f49d4e14-ac84-493a-92cc-31ac6b6ae03b" containerName="flexvol-driver" Jul 2 08:59:59.369834 kubelet[3538]: I0702 08:59:59.368288 3538 memory_manager.go:354] "RemoveStaleState removing state" podUID="f49d4e14-ac84-493a-92cc-31ac6b6ae03b" containerName="flexvol-driver" Jul 2 08:59:59.384662 systemd[1]: Created slice kubepods-besteffort-pod26afb7d1_d231_4e10_a7e7_e749c9287cc1.slice - libcontainer container kubepods-besteffort-pod26afb7d1_d231_4e10_a7e7_e749c9287cc1.slice. 
Jul 2 08:59:59.433999 kubelet[3538]: I0702 08:59:59.433937 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/26afb7d1-d231-4e10-a7e7-e749c9287cc1-policysync\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434165 kubelet[3538]: I0702 08:59:59.434011 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f59t\" (UniqueName: \"kubernetes.io/projected/26afb7d1-d231-4e10-a7e7-e749c9287cc1-kube-api-access-2f59t\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434165 kubelet[3538]: I0702 08:59:59.434053 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26afb7d1-d231-4e10-a7e7-e749c9287cc1-lib-modules\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434165 kubelet[3538]: I0702 08:59:59.434091 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26afb7d1-d231-4e10-a7e7-e749c9287cc1-xtables-lock\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434165 kubelet[3538]: I0702 08:59:59.434128 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/26afb7d1-d231-4e10-a7e7-e749c9287cc1-flexvol-driver-host\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434396 kubelet[3538]: I0702 08:59:59.434180 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26afb7d1-d231-4e10-a7e7-e749c9287cc1-var-lib-calico\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434396 kubelet[3538]: I0702 08:59:59.434246 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/26afb7d1-d231-4e10-a7e7-e749c9287cc1-cni-log-dir\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434396 kubelet[3538]: I0702 08:59:59.434306 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/26afb7d1-d231-4e10-a7e7-e749c9287cc1-cni-net-dir\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434396 kubelet[3538]: I0702 08:59:59.434347 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26afb7d1-d231-4e10-a7e7-e749c9287cc1-tigera-ca-bundle\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434396 kubelet[3538]: I0702 08:59:59.434387 3538 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/26afb7d1-d231-4e10-a7e7-e749c9287cc1-node-certs\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434666 kubelet[3538]: I0702 08:59:59.434423 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/26afb7d1-d231-4e10-a7e7-e749c9287cc1-cni-bin-dir\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.434666 kubelet[3538]: I0702 08:59:59.434475 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/26afb7d1-d231-4e10-a7e7-e749c9287cc1-var-run-calico\") pod \"calico-node-6kr5d\" (UID: \"26afb7d1-d231-4e10-a7e7-e749c9287cc1\") " pod="calico-system/calico-node-6kr5d" Jul 2 08:59:59.690277 containerd[2016]: time="2024-07-02T08:59:59.690129746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6kr5d,Uid:26afb7d1-d231-4e10-a7e7-e749c9287cc1,Namespace:calico-system,Attempt:0,}" Jul 2 08:59:59.743895 containerd[2016]: time="2024-07-02T08:59:59.743412735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:59:59.743895 containerd[2016]: time="2024-07-02T08:59:59.743512803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:59.743895 containerd[2016]: time="2024-07-02T08:59:59.743545575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:59:59.743895 containerd[2016]: time="2024-07-02T08:59:59.743570463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:59:59.798208 systemd[1]: Started cri-containerd-4edb9772574cf1616fd331b74f79e1707baaf8ee1ece938feb19831af32423f8.scope - libcontainer container 4edb9772574cf1616fd331b74f79e1707baaf8ee1ece938feb19831af32423f8. 
Jul 2 08:59:59.879698 containerd[2016]: time="2024-07-02T08:59:59.879537207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6kr5d,Uid:26afb7d1-d231-4e10-a7e7-e749c9287cc1,Namespace:calico-system,Attempt:0,} returns sandbox id \"4edb9772574cf1616fd331b74f79e1707baaf8ee1ece938feb19831af32423f8\"" Jul 2 08:59:59.887080 containerd[2016]: time="2024-07-02T08:59:59.886967403Z" level=info msg="CreateContainer within sandbox \"4edb9772574cf1616fd331b74f79e1707baaf8ee1ece938feb19831af32423f8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 08:59:59.942102 containerd[2016]: time="2024-07-02T08:59:59.941885199Z" level=info msg="CreateContainer within sandbox \"4edb9772574cf1616fd331b74f79e1707baaf8ee1ece938feb19831af32423f8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ae3c2f66d01a36e921cbecf584023e374a97435d7e03c73b2ce57d6270dcc6e8\"" Jul 2 08:59:59.944788 containerd[2016]: time="2024-07-02T08:59:59.943521351Z" level=info msg="StartContainer for \"ae3c2f66d01a36e921cbecf584023e374a97435d7e03c73b2ce57d6270dcc6e8\"" Jul 2 09:00:00.008435 systemd[1]: Started cri-containerd-ae3c2f66d01a36e921cbecf584023e374a97435d7e03c73b2ce57d6270dcc6e8.scope - libcontainer container ae3c2f66d01a36e921cbecf584023e374a97435d7e03c73b2ce57d6270dcc6e8. Jul 2 09:00:00.047557 kubelet[3538]: E0702 09:00:00.046493 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 09:00:00.055402 kubelet[3538]: I0702 09:00:00.055237 3538 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ced013d-3208-4f65-9483-96b6f97cc71c" path="/var/lib/kubelet/pods/2ced013d-3208-4f65-9483-96b6f97cc71c/volumes" Jul 2 09:00:00.058441 kubelet[3538]: I0702 09:00:00.057852 3538 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f49d4e14-ac84-493a-92cc-31ac6b6ae03b" path="/var/lib/kubelet/pods/f49d4e14-ac84-493a-92cc-31ac6b6ae03b/volumes" Jul 2 09:00:00.147808 containerd[2016]: time="2024-07-02T09:00:00.147675709Z" level=info msg="StartContainer for \"ae3c2f66d01a36e921cbecf584023e374a97435d7e03c73b2ce57d6270dcc6e8\" returns successfully" Jul 2 09:00:00.249494 systemd[1]: cri-containerd-ae3c2f66d01a36e921cbecf584023e374a97435d7e03c73b2ce57d6270dcc6e8.scope: Deactivated successfully. Jul 2 09:00:00.311100 kubelet[3538]: I0702 09:00:00.311011 3538 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 09:00:00.362075 containerd[2016]: time="2024-07-02T09:00:00.361995470Z" level=info msg="shim disconnected" id=ae3c2f66d01a36e921cbecf584023e374a97435d7e03c73b2ce57d6270dcc6e8 namespace=k8s.io Jul 2 09:00:00.363752 containerd[2016]: time="2024-07-02T09:00:00.362851598Z" level=warning msg="cleaning up after shim disconnected" id=ae3c2f66d01a36e921cbecf584023e374a97435d7e03c73b2ce57d6270dcc6e8 namespace=k8s.io Jul 2 09:00:00.363752 containerd[2016]: time="2024-07-02T09:00:00.362899406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:00:00.715658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae3c2f66d01a36e921cbecf584023e374a97435d7e03c73b2ce57d6270dcc6e8-rootfs.mount: Deactivated successfully. 
Jul 2 09:00:01.316629 containerd[2016]: time="2024-07-02T09:00:01.316560578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 09:00:02.026487 kubelet[3538]: I0702 09:00:02.025642 3538 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 09:00:02.047136 kubelet[3538]: E0702 09:00:02.046987 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 09:00:04.046680 kubelet[3538]: E0702 09:00:04.045745 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 09:00:06.046805 kubelet[3538]: E0702 09:00:06.046234 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 09:00:08.046596 kubelet[3538]: E0702 09:00:08.046099 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 09:00:10.047245 kubelet[3538]: E0702 09:00:10.046400 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 09:00:10.794330 systemd[1]: Started sshd@9-172.31.26.125:22-147.75.109.163:39084.service - OpenSSH per-connection server daemon (147.75.109.163:39084). Jul 2 09:00:10.987095 sshd[4585]: Accepted publickey for core from 147.75.109.163 port 39084 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:10.990794 sshd[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:11.002078 systemd-logind[1990]: New session 10 of user core. Jul 2 09:00:11.008338 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 09:00:11.309667 sshd[4585]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:11.318016 systemd[1]: sshd@9-172.31.26.125:22-147.75.109.163:39084.service: Deactivated successfully. Jul 2 09:00:11.323702 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 09:00:11.329426 systemd-logind[1990]: Session 10 logged out. Waiting for processes to exit. Jul 2 09:00:11.333776 systemd-logind[1990]: Removed session 10. 
Jul 2 09:00:11.560862 containerd[2016]: time="2024-07-02T09:00:11.560679241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:11.562569 containerd[2016]: time="2024-07-02T09:00:11.562493965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 09:00:11.564180 containerd[2016]: time="2024-07-02T09:00:11.564086053Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:11.568539 containerd[2016]: time="2024-07-02T09:00:11.568453909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:11.571332 containerd[2016]: time="2024-07-02T09:00:11.570643837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 10.253992491s" Jul 2 09:00:11.571332 containerd[2016]: time="2024-07-02T09:00:11.570769261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 09:00:11.577362 containerd[2016]: time="2024-07-02T09:00:11.577269373Z" level=info msg="CreateContainer within sandbox \"4edb9772574cf1616fd331b74f79e1707baaf8ee1ece938feb19831af32423f8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 09:00:11.603822 containerd[2016]: time="2024-07-02T09:00:11.603553177Z" level=info msg="CreateContainer within sandbox \"4edb9772574cf1616fd331b74f79e1707baaf8ee1ece938feb19831af32423f8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d39113fedc6aac1757f16f864cd3a27661c79af3c5eef96df0591e6affd5a978\"" Jul 2 09:00:11.605816 containerd[2016]: time="2024-07-02T09:00:11.605309077Z" level=info msg="StartContainer for \"d39113fedc6aac1757f16f864cd3a27661c79af3c5eef96df0591e6affd5a978\"" Jul 2 09:00:11.673033 systemd[1]: Started cri-containerd-d39113fedc6aac1757f16f864cd3a27661c79af3c5eef96df0591e6affd5a978.scope - libcontainer container d39113fedc6aac1757f16f864cd3a27661c79af3c5eef96df0591e6affd5a978. 
Jul 2 09:00:11.729695 containerd[2016]: time="2024-07-02T09:00:11.729471554Z" level=info msg="StartContainer for \"d39113fedc6aac1757f16f864cd3a27661c79af3c5eef96df0591e6affd5a978\" returns successfully" Jul 2 09:00:12.047158 kubelet[3538]: E0702 09:00:12.046635 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 09:00:12.978008 containerd[2016]: time="2024-07-02T09:00:12.977912152Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 09:00:12.983794 systemd[1]: cri-containerd-d39113fedc6aac1757f16f864cd3a27661c79af3c5eef96df0591e6affd5a978.scope: Deactivated successfully. Jul 2 09:00:13.024697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d39113fedc6aac1757f16f864cd3a27661c79af3c5eef96df0591e6affd5a978-rootfs.mount: Deactivated successfully. Jul 2 09:00:13.055787 kubelet[3538]: I0702 09:00:13.055526 3538 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 09:00:13.090928 kubelet[3538]: I0702 09:00:13.090361 3538 topology_manager.go:215] "Topology Admit Handler" podUID="791c6356-b414-490e-bf36-89d3b52e7f10" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9hlf6" Jul 2 09:00:13.090928 kubelet[3538]: I0702 09:00:13.090840 3538 topology_manager.go:215] "Topology Admit Handler" podUID="8785dba6-5d51-4a35-b2bd-16a3597800fe" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6z62b" Jul 2 09:00:13.098151 kubelet[3538]: I0702 09:00:13.097445 3538 topology_manager.go:215] "Topology Admit Handler" podUID="17d9130b-b05e-4363-bd1a-27eab10c52c9" podNamespace="calico-system" podName="calico-kube-controllers-84f95fb8c5-z2z9x" Jul 2 09:00:13.119027 systemd[1]: Created slice kubepods-burstable-pod791c6356_b414_490e_bf36_89d3b52e7f10.slice - libcontainer container kubepods-burstable-pod791c6356_b414_490e_bf36_89d3b52e7f10.slice. Jul 2 09:00:13.146321 systemd[1]: Created slice kubepods-burstable-pod8785dba6_5d51_4a35_b2bd_16a3597800fe.slice - libcontainer container kubepods-burstable-pod8785dba6_5d51_4a35_b2bd_16a3597800fe.slice. Jul 2 09:00:13.165520 systemd[1]: Created slice kubepods-besteffort-pod17d9130b_b05e_4363_bd1a_27eab10c52c9.slice - libcontainer container kubepods-besteffort-pod17d9130b_b05e_4363_bd1a_27eab10c52c9.slice. 
Jul 2 09:00:13.236885 kubelet[3538]: I0702 09:00:13.236651 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bflcj\" (UniqueName: \"kubernetes.io/projected/791c6356-b414-490e-bf36-89d3b52e7f10-kube-api-access-bflcj\") pod \"coredns-7db6d8ff4d-9hlf6\" (UID: \"791c6356-b414-490e-bf36-89d3b52e7f10\") " pod="kube-system/coredns-7db6d8ff4d-9hlf6" Jul 2 09:00:13.236885 kubelet[3538]: I0702 09:00:13.236751 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkkbl\" (UniqueName: \"kubernetes.io/projected/8785dba6-5d51-4a35-b2bd-16a3597800fe-kube-api-access-vkkbl\") pod \"coredns-7db6d8ff4d-6z62b\" (UID: \"8785dba6-5d51-4a35-b2bd-16a3597800fe\") " pod="kube-system/coredns-7db6d8ff4d-6z62b" Jul 2 09:00:13.236885 kubelet[3538]: I0702 09:00:13.236810 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltl2p\" (UniqueName: \"kubernetes.io/projected/17d9130b-b05e-4363-bd1a-27eab10c52c9-kube-api-access-ltl2p\") pod \"calico-kube-controllers-84f95fb8c5-z2z9x\" (UID: \"17d9130b-b05e-4363-bd1a-27eab10c52c9\") " pod="calico-system/calico-kube-controllers-84f95fb8c5-z2z9x" Jul 2 09:00:13.236885 kubelet[3538]: I0702 09:00:13.236864 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8785dba6-5d51-4a35-b2bd-16a3597800fe-config-volume\") pod \"coredns-7db6d8ff4d-6z62b\" (UID: \"8785dba6-5d51-4a35-b2bd-16a3597800fe\") " pod="kube-system/coredns-7db6d8ff4d-6z62b" Jul 2 09:00:13.237205 kubelet[3538]: I0702 09:00:13.236923 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17d9130b-b05e-4363-bd1a-27eab10c52c9-tigera-ca-bundle\") pod \"calico-kube-controllers-84f95fb8c5-z2z9x\" (UID: \"17d9130b-b05e-4363-bd1a-27eab10c52c9\") " pod="calico-system/calico-kube-controllers-84f95fb8c5-z2z9x" Jul 2 09:00:13.237205 kubelet[3538]: I0702 09:00:13.236959 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/791c6356-b414-490e-bf36-89d3b52e7f10-config-volume\") pod \"coredns-7db6d8ff4d-9hlf6\" (UID: \"791c6356-b414-490e-bf36-89d3b52e7f10\") " pod="kube-system/coredns-7db6d8ff4d-9hlf6" Jul 2 09:00:13.431457 containerd[2016]: time="2024-07-02T09:00:13.431346662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hlf6,Uid:791c6356-b414-490e-bf36-89d3b52e7f10,Namespace:kube-system,Attempt:0,}" Jul 2 09:00:13.456701 containerd[2016]: time="2024-07-02T09:00:13.456465819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6z62b,Uid:8785dba6-5d51-4a35-b2bd-16a3597800fe,Namespace:kube-system,Attempt:0,}" Jul 2 09:00:13.476675 containerd[2016]: time="2024-07-02T09:00:13.476593299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84f95fb8c5-z2z9x,Uid:17d9130b-b05e-4363-bd1a-27eab10c52c9,Namespace:calico-system,Attempt:0,}" Jul 2 09:00:13.979031 containerd[2016]: time="2024-07-02T09:00:13.978910133Z" level=info msg="shim disconnected" id=d39113fedc6aac1757f16f864cd3a27661c79af3c5eef96df0591e6affd5a978 namespace=k8s.io Jul 2 09:00:13.979031 containerd[2016]: time="2024-07-02T09:00:13.979002077Z" level=warning msg="cleaning up after shim 
disconnected" id=d39113fedc6aac1757f16f864cd3a27661c79af3c5eef96df0591e6affd5a978 namespace=k8s.io Jul 2 09:00:13.980141 containerd[2016]: time="2024-07-02T09:00:13.979051133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:00:14.067562 systemd[1]: Created slice kubepods-besteffort-podf75ea9cc_4b02_4191_80d2_ca7bba2a9b74.slice - libcontainer container kubepods-besteffort-podf75ea9cc_4b02_4191_80d2_ca7bba2a9b74.slice. Jul 2 09:00:14.074596 containerd[2016]: time="2024-07-02T09:00:14.074056214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbzvw,Uid:f75ea9cc-4b02-4191-80d2-ca7bba2a9b74,Namespace:calico-system,Attempt:0,}" Jul 2 09:00:14.234914 containerd[2016]: time="2024-07-02T09:00:14.233822822Z" level=error msg="Failed to destroy network for sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.236770 containerd[2016]: time="2024-07-02T09:00:14.236140190Z" level=error msg="encountered an error cleaning up failed sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.241741 containerd[2016]: time="2024-07-02T09:00:14.240572871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hlf6,Uid:791c6356-b414-490e-bf36-89d3b52e7f10,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.241529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de-shm.mount: Deactivated successfully. 
Jul 2 09:00:14.242602 kubelet[3538]: E0702 09:00:14.242331 3538 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.247260 kubelet[3538]: E0702 09:00:14.245038 3538 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hlf6" Jul 2 09:00:14.247260 kubelet[3538]: E0702 09:00:14.245107 3538 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hlf6" Jul 2 09:00:14.247260 kubelet[3538]: E0702 09:00:14.245188 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9hlf6_kube-system(791c6356-b414-490e-bf36-89d3b52e7f10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9hlf6_kube-system(791c6356-b414-490e-bf36-89d3b52e7f10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hlf6" podUID="791c6356-b414-490e-bf36-89d3b52e7f10" Jul 2 09:00:14.256803 containerd[2016]: time="2024-07-02T09:00:14.256730271Z" level=error msg="Failed to destroy network for sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.257583 containerd[2016]: time="2024-07-02T09:00:14.257527935Z" level=error msg="encountered an error cleaning up failed sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.258772 containerd[2016]: time="2024-07-02T09:00:14.257766267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84f95fb8c5-z2z9x,Uid:17d9130b-b05e-4363-bd1a-27eab10c52c9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.261782 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521-shm.mount: Deactivated successfully. Jul 2 09:00:14.262040 kubelet[3538]: E0702 09:00:14.261919 3538 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.262147 kubelet[3538]: E0702 09:00:14.262056 3538 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84f95fb8c5-z2z9x" Jul 2 09:00:14.262147 kubelet[3538]: E0702 09:00:14.262098 3538 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84f95fb8c5-z2z9x" Jul 2 09:00:14.262267 kubelet[3538]: E0702 09:00:14.262192 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84f95fb8c5-z2z9x_calico-system(17d9130b-b05e-4363-bd1a-27eab10c52c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84f95fb8c5-z2z9x_calico-system(17d9130b-b05e-4363-bd1a-27eab10c52c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84f95fb8c5-z2z9x" podUID="17d9130b-b05e-4363-bd1a-27eab10c52c9" Jul 2 09:00:14.279552 containerd[2016]: time="2024-07-02T09:00:14.279482103Z" level=error msg="Failed to destroy network for sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.280723 containerd[2016]: time="2024-07-02T09:00:14.280645383Z" level=error msg="encountered an error cleaning up failed sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.281107 containerd[2016]: time="2024-07-02T09:00:14.280940583Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-6z62b,Uid:8785dba6-5d51-4a35-b2bd-16a3597800fe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.281798 kubelet[3538]: E0702 09:00:14.281364 3538 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.281798 kubelet[3538]: E0702 09:00:14.281445 3538 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6z62b" Jul 2 09:00:14.281798 kubelet[3538]: E0702 09:00:14.281478 3538 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6z62b" Jul 2 09:00:14.282122 kubelet[3538]: E0702 09:00:14.281549 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6z62b_kube-system(8785dba6-5d51-4a35-b2bd-16a3597800fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6z62b_kube-system(8785dba6-5d51-4a35-b2bd-16a3597800fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6z62b" podUID="8785dba6-5d51-4a35-b2bd-16a3597800fe" Jul 2 09:00:14.313760 containerd[2016]: time="2024-07-02T09:00:14.313040703Z" level=error msg="Failed to destroy network for sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.313760 containerd[2016]: time="2024-07-02T09:00:14.313616403Z" level=error msg="encountered an error cleaning up failed sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.314008 containerd[2016]: 
time="2024-07-02T09:00:14.313696083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbzvw,Uid:f75ea9cc-4b02-4191-80d2-ca7bba2a9b74,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.315870 kubelet[3538]: E0702 09:00:14.314371 3538 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.315870 kubelet[3538]: E0702 09:00:14.314452 3538 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbzvw" Jul 2 09:00:14.315870 kubelet[3538]: E0702 09:00:14.314483 3538 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbzvw" Jul 2 09:00:14.316163 kubelet[3538]: E0702 09:00:14.314554 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbzvw_calico-system(f75ea9cc-4b02-4191-80d2-ca7bba2a9b74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbzvw_calico-system(f75ea9cc-4b02-4191-80d2-ca7bba2a9b74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 09:00:14.362253 kubelet[3538]: I0702 09:00:14.362215 3538 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:00:14.364504 containerd[2016]: time="2024-07-02T09:00:14.363364671Z" level=info msg="StopPodSandbox for \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\"" Jul 2 09:00:14.364504 containerd[2016]: time="2024-07-02T09:00:14.363791979Z" level=info msg="Ensure that sandbox 95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521 in task-service has been cleanup successfully" Jul 2 09:00:14.366423 kubelet[3538]: I0702 09:00:14.365771 3538 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:14.367688 containerd[2016]: time="2024-07-02T09:00:14.367547655Z" level=info msg="StopPodSandbox for \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\"" Jul 2 09:00:14.370992 kubelet[3538]: I0702 09:00:14.370928 3538 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:14.373310 containerd[2016]: time="2024-07-02T09:00:14.371850771Z" level=info msg="Ensure that sandbox a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe in task-service has been cleanup successfully" Jul 2 09:00:14.374509 containerd[2016]: time="2024-07-02T09:00:14.374443059Z" level=info msg="StopPodSandbox for \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\"" Jul 2 09:00:14.375526 containerd[2016]: time="2024-07-02T09:00:14.375145395Z" level=info msg="Ensure that sandbox 98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de in task-service has been cleanup successfully" Jul 2 09:00:14.391845 containerd[2016]: time="2024-07-02T09:00:14.391198479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 09:00:14.393370 kubelet[3538]: I0702 09:00:14.393271 3538 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:14.401745 containerd[2016]: time="2024-07-02T09:00:14.399157215Z" level=info msg="StopPodSandbox for \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\"" Jul 2 09:00:14.405039 containerd[2016]: time="2024-07-02T09:00:14.404982243Z" level=info msg="Ensure that sandbox c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33 in task-service has been cleanup successfully" Jul 2 09:00:14.537933 containerd[2016]: time="2024-07-02T09:00:14.537734224Z" level=error msg="StopPodSandbox for \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\" failed" error="failed to destroy network for sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.540493 kubelet[3538]: E0702 09:00:14.538251 3538 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:00:14.540493 kubelet[3538]: E0702 09:00:14.538354 3538 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521"} Jul 2 09:00:14.540493 kubelet[3538]: E0702 09:00:14.538409 3538 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17d9130b-b05e-4363-bd1a-27eab10c52c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:00:14.540493 kubelet[3538]: E0702 09:00:14.538446 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"17d9130b-b05e-4363-bd1a-27eab10c52c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84f95fb8c5-z2z9x" podUID="17d9130b-b05e-4363-bd1a-27eab10c52c9" Jul 2 09:00:14.545182 containerd[2016]: time="2024-07-02T09:00:14.545115412Z" level=error msg="StopPodSandbox for \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\" failed" error="failed to destroy network for sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.546130 kubelet[3538]: E0702 09:00:14.546062 3538 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:14.546257 kubelet[3538]: E0702 09:00:14.546137 3538 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe"} Jul 2 09:00:14.546257 kubelet[3538]: E0702 09:00:14.546195 3538 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8785dba6-5d51-4a35-b2bd-16a3597800fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:00:14.546257 kubelet[3538]: E0702 09:00:14.546240 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8785dba6-5d51-4a35-b2bd-16a3597800fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6z62b" podUID="8785dba6-5d51-4a35-b2bd-16a3597800fe" Jul 2 09:00:14.554326 containerd[2016]: time="2024-07-02T09:00:14.553809064Z" level=error msg="StopPodSandbox for \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\" failed" error="failed to destroy network for sandbox 
\"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.554490 kubelet[3538]: E0702 09:00:14.554118 3538 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:14.554490 kubelet[3538]: E0702 09:00:14.554183 3538 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33"} Jul 2 09:00:14.554490 kubelet[3538]: E0702 09:00:14.554237 3538 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:00:14.554490 kubelet[3538]: E0702 09:00:14.554275 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbzvw" podUID="f75ea9cc-4b02-4191-80d2-ca7bba2a9b74" Jul 2 09:00:14.557566 containerd[2016]: time="2024-07-02T09:00:14.557009644Z" level=error msg="StopPodSandbox for \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\" failed" error="failed to destroy network for sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:00:14.557703 kubelet[3538]: E0702 09:00:14.557347 3538 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:14.557703 kubelet[3538]: E0702 09:00:14.557417 3538 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de"} Jul 2 09:00:14.557703 kubelet[3538]: E0702 09:00:14.557472 3538 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"791c6356-b414-490e-bf36-89d3b52e7f10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:00:14.557703 kubelet[3538]: E0702 09:00:14.557511 3538 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"791c6356-b414-490e-bf36-89d3b52e7f10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hlf6" podUID="791c6356-b414-490e-bf36-89d3b52e7f10" Jul 2 09:00:15.023945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33-shm.mount: Deactivated successfully. Jul 2 09:00:15.024344 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe-shm.mount: Deactivated successfully. Jul 2 09:00:16.359869 systemd[1]: Started sshd@10-172.31.26.125:22-147.75.109.163:58542.service - OpenSSH per-connection server daemon (147.75.109.163:58542). Jul 2 09:00:16.543452 sshd[4859]: Accepted publickey for core from 147.75.109.163 port 58542 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:16.545626 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:16.558660 systemd-logind[1990]: New session 11 of user core. Jul 2 09:00:16.563364 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 09:00:16.833852 sshd[4859]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:16.845759 systemd[1]: sshd@10-172.31.26.125:22-147.75.109.163:58542.service: Deactivated successfully. Jul 2 09:00:16.849362 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 09:00:16.854863 systemd-logind[1990]: Session 11 logged out. Waiting for processes to exit. Jul 2 09:00:16.857779 systemd-logind[1990]: Removed session 11. Jul 2 09:00:21.589369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048859550.mount: Deactivated successfully. 
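Every failure above reduces to the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename before it will set up or tear down a pod network, and that file only appears once the calico/node container is running and has mounted /var/lib/calico/ from the host. A minimal Go sketch of that kind of readiness gate follows; it is illustrative only, not Calico's actual implementation, and the hint text simply mirrors the log message.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// nodenameFile is the path the CNI plugin checks; calico/node writes it
// once it has started and bind-mounted /var/lib/calico/ from the host.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the check implied by the log messages: if the file
// is missing, fail with a hint instead of a bare ENOENT.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, os.ErrNotExist) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```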
Jul 2 09:00:21.689943 containerd[2016]: time="2024-07-02T09:00:21.689500188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:21.691411 containerd[2016]: time="2024-07-02T09:00:21.691321140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 09:00:21.693307 containerd[2016]: time="2024-07-02T09:00:21.693226416Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:21.697644 containerd[2016]: time="2024-07-02T09:00:21.697542828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:21.699485 containerd[2016]: time="2024-07-02T09:00:21.698844360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 7.304051377s" Jul 2 09:00:21.699485 containerd[2016]: time="2024-07-02T09:00:21.698907108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 09:00:21.728981 containerd[2016]: time="2024-07-02T09:00:21.723618576Z" level=info msg="CreateContainer within sandbox \"4edb9772574cf1616fd331b74f79e1707baaf8ee1ece938feb19831af32423f8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 09:00:21.755006 containerd[2016]: time="2024-07-02T09:00:21.754941252Z" level=info msg="CreateContainer within sandbox \"4edb9772574cf1616fd331b74f79e1707baaf8ee1ece938feb19831af32423f8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"93fd359745902fde283c82970b1f65e649d8fe6485efa0a8067bbb8e7880e7cb\"" Jul 2 09:00:21.756269 containerd[2016]: time="2024-07-02T09:00:21.756162948Z" level=info msg="StartContainer for \"93fd359745902fde283c82970b1f65e649d8fe6485efa0a8067bbb8e7880e7cb\"" Jul 2 09:00:21.808045 systemd[1]: Started cri-containerd-93fd359745902fde283c82970b1f65e649d8fe6485efa0a8067bbb8e7880e7cb.scope - libcontainer container 93fd359745902fde283c82970b1f65e649d8fe6485efa0a8067bbb8e7880e7cb. Jul 2 09:00:21.873183 containerd[2016]: time="2024-07-02T09:00:21.872939424Z" level=info msg="StartContainer for \"93fd359745902fde283c82970b1f65e649d8fe6485efa0a8067bbb8e7880e7cb\" returns successfully" Jul 2 09:00:21.881358 systemd[1]: Started sshd@11-172.31.26.125:22-147.75.109.163:58550.service - OpenSSH per-connection server daemon (147.75.109.163:58550). Jul 2 09:00:22.061650 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 09:00:22.061828 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 09:00:22.078390 sshd[4911]: Accepted publickey for core from 147.75.109.163 port 58550 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:22.080687 sshd[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:22.090312 systemd-logind[1990]: New session 12 of user core. 
Jul 2 09:00:22.100023 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 09:00:22.420654 sshd[4911]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:22.432439 systemd-logind[1990]: Session 12 logged out. Waiting for processes to exit. Jul 2 09:00:22.434536 systemd[1]: sshd@11-172.31.26.125:22-147.75.109.163:58550.service: Deactivated successfully. Jul 2 09:00:22.446084 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 09:00:22.470586 systemd-logind[1990]: Removed session 12. Jul 2 09:00:22.474183 systemd[1]: Started sshd@12-172.31.26.125:22-147.75.109.163:46542.service - OpenSSH per-connection server daemon (147.75.109.163:46542). Jul 2 09:00:22.684466 sshd[4952]: Accepted publickey for core from 147.75.109.163 port 46542 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:22.690531 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:22.707020 systemd-logind[1990]: New session 13 of user core. Jul 2 09:00:22.718851 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 09:00:23.065789 sshd[4952]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:23.074542 systemd[1]: sshd@12-172.31.26.125:22-147.75.109.163:46542.service: Deactivated successfully. Jul 2 09:00:23.081398 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 09:00:23.090623 systemd-logind[1990]: Session 13 logged out. Waiting for processes to exit. Jul 2 09:00:23.116005 systemd[1]: Started sshd@13-172.31.26.125:22-147.75.109.163:46548.service - OpenSSH per-connection server daemon (147.75.109.163:46548). Jul 2 09:00:23.119097 systemd-logind[1990]: Removed session 13. Jul 2 09:00:23.302970 sshd[4996]: Accepted publickey for core from 147.75.109.163 port 46548 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:23.306361 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:23.318243 systemd-logind[1990]: New session 14 of user core. Jul 2 09:00:23.326029 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 09:00:23.609569 sshd[4996]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:23.615984 systemd-logind[1990]: Session 14 logged out. Waiting for processes to exit. Jul 2 09:00:23.616536 systemd[1]: sshd@13-172.31.26.125:22-147.75.109.163:46548.service: Deactivated successfully. Jul 2 09:00:23.621567 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 09:00:23.626796 systemd-logind[1990]: Removed session 14. Jul 2 09:00:24.708950 (udev-worker)[4924]: Network interface NamePolicy= disabled on kernel command line. Jul 2 09:00:24.712475 systemd-networkd[1929]: vxlan.calico: Link UP Jul 2 09:00:24.712483 systemd-networkd[1929]: vxlan.calico: Gained carrier Jul 2 09:00:24.758879 (udev-worker)[4923]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 09:00:26.099927 systemd-networkd[1929]: vxlan.calico: Gained IPv6LL Jul 2 09:00:27.047190 containerd[2016]: time="2024-07-02T09:00:27.046528010Z" level=info msg="StopPodSandbox for \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\"" Jul 2 09:00:27.047190 containerd[2016]: time="2024-07-02T09:00:27.046606130Z" level=info msg="StopPodSandbox for \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\"" Jul 2 09:00:27.214108 kubelet[3538]: I0702 09:00:27.214006 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6kr5d" podStartSLOduration=7.829625781 podStartE2EDuration="28.213982755s" podCreationTimestamp="2024-07-02 08:59:59 +0000 UTC" firstStartedPulling="2024-07-02 09:00:01.316041554 +0000 UTC m=+31.506632581" lastFinishedPulling="2024-07-02 09:00:21.700398516 +0000 UTC m=+51.890989555" observedRunningTime="2024-07-02 09:00:22.513398808 +0000 UTC m=+52.703989859" watchObservedRunningTime="2024-07-02 09:00:27.213982755 +0000 UTC m=+57.404573794" Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.217 [INFO][5237] k8s.go 608: Cleaning up netns ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.217 [INFO][5237] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" iface="eth0" netns="/var/run/netns/cni-a4d5f20c-a5f6-7429-c397-e0407eaa6326" Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.218 [INFO][5237] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" iface="eth0" netns="/var/run/netns/cni-a4d5f20c-a5f6-7429-c397-e0407eaa6326" Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.218 [INFO][5237] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" iface="eth0" netns="/var/run/netns/cni-a4d5f20c-a5f6-7429-c397-e0407eaa6326" Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.218 [INFO][5237] k8s.go 615: Releasing IP address(es) ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.218 [INFO][5237] utils.go 188: Calico CNI releasing IP address ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.333 [INFO][5256] ipam_plugin.go 411: Releasing address using handleID ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" HandleID="k8s-pod-network.a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.335 [INFO][5256] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.335 [INFO][5256] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.355 [WARNING][5256] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" HandleID="k8s-pod-network.a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.355 [INFO][5256] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" HandleID="k8s-pod-network.a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.358 [INFO][5256] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:00:27.368747 containerd[2016]: 2024-07-02 09:00:27.362 [INFO][5237] k8s.go 621: Teardown processing complete. ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:27.369692 containerd[2016]: time="2024-07-02T09:00:27.369592756Z" level=info msg="TearDown network for sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\" successfully" Jul 2 09:00:27.369912 containerd[2016]: time="2024-07-02T09:00:27.369687052Z" level=info msg="StopPodSandbox for \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\" returns successfully" Jul 2 09:00:27.375975 containerd[2016]: time="2024-07-02T09:00:27.375903436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6z62b,Uid:8785dba6-5d51-4a35-b2bd-16a3597800fe,Namespace:kube-system,Attempt:1,}" Jul 2 09:00:27.379311 systemd[1]: run-netns-cni\x2da4d5f20c\x2da5f6\x2d7429\x2dc397\x2de0407eaa6326.mount: Deactivated successfully. Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.231 [INFO][5248] k8s.go 608: Cleaning up netns ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.231 [INFO][5248] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" iface="eth0" netns="/var/run/netns/cni-18224d1f-a437-1410-ece8-a4e687d98099" Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.234 [INFO][5248] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" iface="eth0" netns="/var/run/netns/cni-18224d1f-a437-1410-ece8-a4e687d98099" Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.234 [INFO][5248] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" iface="eth0" netns="/var/run/netns/cni-18224d1f-a437-1410-ece8-a4e687d98099" Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.234 [INFO][5248] k8s.go 615: Releasing IP address(es) ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.234 [INFO][5248] utils.go 188: Calico CNI releasing IP address ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.343 [INFO][5260] ipam_plugin.go 411: Releasing address using handleID ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" HandleID="k8s-pod-network.98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.344 [INFO][5260] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.358 [INFO][5260] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.393 [WARNING][5260] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" HandleID="k8s-pod-network.98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.393 [INFO][5260] ipam_plugin.go 439: Releasing address using workloadID ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" HandleID="k8s-pod-network.98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.398 [INFO][5260] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:00:27.417029 containerd[2016]: 2024-07-02 09:00:27.410 [INFO][5248] k8s.go 621: Teardown processing complete. ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:27.421028 containerd[2016]: time="2024-07-02T09:00:27.420826588Z" level=info msg="TearDown network for sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\" successfully" Jul 2 09:00:27.421028 containerd[2016]: time="2024-07-02T09:00:27.420881428Z" level=info msg="StopPodSandbox for \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\" returns successfully" Jul 2 09:00:27.428048 containerd[2016]: time="2024-07-02T09:00:27.423489520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hlf6,Uid:791c6356-b414-490e-bf36-89d3b52e7f10,Namespace:kube-system,Attempt:1,}" Jul 2 09:00:27.429644 systemd[1]: run-netns-cni\x2d18224d1f\x2da437\x2d1410\x2dece8\x2da4e687d98099.mount: Deactivated successfully. 
Jul 2 09:00:27.836857 systemd-networkd[1929]: cali180ab380623: Link UP Jul 2 09:00:27.841654 systemd-networkd[1929]: cali180ab380623: Gained carrier Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.609 [INFO][5280] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0 coredns-7db6d8ff4d- kube-system 791c6356-b414-490e-bf36-89d3b52e7f10 888 0 2024-07-02 08:59:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-125 coredns-7db6d8ff4d-9hlf6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali180ab380623 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hlf6" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.610 [INFO][5280] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hlf6" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.696 [INFO][5292] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" HandleID="k8s-pod-network.dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.738 [INFO][5292] ipam_plugin.go 264: Auto assigning IP ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" HandleID="k8s-pod-network.dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318d20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-125", "pod":"coredns-7db6d8ff4d-9hlf6", "timestamp":"2024-07-02 09:00:27.696066965 +0000 UTC"}, Hostname:"ip-172-31-26-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.738 [INFO][5292] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.738 [INFO][5292] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.738 [INFO][5292] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-125' Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.743 [INFO][5292] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" host="ip-172-31-26-125" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.753 [INFO][5292] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-125" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.770 [INFO][5292] ipam.go 489: Trying affinity for 192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.775 [INFO][5292] ipam.go 155: Attempting to load block cidr=192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.786 [INFO][5292] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.786 [INFO][5292] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" host="ip-172-31-26-125" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.789 [INFO][5292] ipam.go 1685: Creating new handle: k8s-pod-network.dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625 Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.797 [INFO][5292] ipam.go 1203: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" host="ip-172-31-26-125" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.809 [INFO][5292] ipam.go 1216: Successfully claimed IPs: [192.168.63.1/26] block=192.168.63.0/26 handle="k8s-pod-network.dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" host="ip-172-31-26-125" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.809 [INFO][5292] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.1/26] handle="k8s-pod-network.dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" host="ip-172-31-26-125" Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.809 [INFO][5292] ipam_plugin.go 373: Released host-wide IPAM lock. 
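The IPAM trace above shows the per-node pattern: acquire the host-wide lock, confirm this node's affinity to the 192.168.63.0/26 block, claim the next free address, and release the lock. The next lines record the resulting assignment of 192.168.63.1 to this CoreDNS pod, and the same sequence further down hands 192.168.63.2 to the second one. The toy allocator below illustrates the same idea under those assumptions; it keeps state in memory and has no datastore, unlike the real implementation.

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator hands out addresses from one affinity block, e.g. a /26,
// serialising assignments the way the host-wide IPAM lock does in the log.
type blockAllocator struct {
	mu    sync.Mutex
	block netip.Prefix
	used  map[netip.Addr]bool
}

func newBlockAllocator(cidr string) (*blockAllocator, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return nil, err
	}
	return &blockAllocator{block: p.Masked(), used: map[netip.Addr]bool{}}, nil
}

// assign returns the next free address, skipping the network address itself.
func (b *blockAllocator) assign() (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for a := b.block.Addr().Next(); b.block.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.block)
}

func main() {
	alloc, err := newBlockAllocator("192.168.63.0/26")
	if err != nil {
		panic(err)
	}
	for i := 0; i < 2; i++ {
		a, _ := alloc.assign()
		fmt.Println("assigned", a) // 192.168.63.1, then 192.168.63.2
	}
}
```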
Jul 2 09:00:27.873577 containerd[2016]: 2024-07-02 09:00:27.810 [INFO][5292] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.63.1/26] IPv6=[] ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" HandleID="k8s-pod-network.dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:27.875177 containerd[2016]: 2024-07-02 09:00:27.814 [INFO][5280] k8s.go 386: Populated endpoint ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hlf6" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"791c6356-b414-490e-bf36-89d3b52e7f10", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"", Pod:"coredns-7db6d8ff4d-9hlf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali180ab380623", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:27.875177 containerd[2016]: 2024-07-02 09:00:27.816 [INFO][5280] k8s.go 387: Calico CNI using IPs: [192.168.63.1/32] ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hlf6" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:27.875177 containerd[2016]: 2024-07-02 09:00:27.816 [INFO][5280] dataplane_linux.go 68: Setting the host side veth name to cali180ab380623 ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hlf6" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:27.875177 containerd[2016]: 2024-07-02 09:00:27.843 [INFO][5280] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hlf6" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:27.875177 containerd[2016]: 2024-07-02 
09:00:27.845 [INFO][5280] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hlf6" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"791c6356-b414-490e-bf36-89d3b52e7f10", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625", Pod:"coredns-7db6d8ff4d-9hlf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali180ab380623", MAC:"5a:8a:ab:f6:43:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:27.875177 containerd[2016]: 2024-07-02 09:00:27.863 [INFO][5280] k8s.go 500: Wrote updated endpoint to datastore ContainerID="dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hlf6" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:27.972650 containerd[2016]: time="2024-07-02T09:00:27.972485407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:00:27.976979 containerd[2016]: time="2024-07-02T09:00:27.972616615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:27.976979 containerd[2016]: time="2024-07-02T09:00:27.972660967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:00:27.976979 containerd[2016]: time="2024-07-02T09:00:27.972695791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:28.051582 containerd[2016]: time="2024-07-02T09:00:28.050100147Z" level=info msg="StopPodSandbox for \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\"" Jul 2 09:00:28.051264 systemd-networkd[1929]: calib65d5ffe587: Link UP Jul 2 09:00:28.061730 containerd[2016]: time="2024-07-02T09:00:28.058606455Z" level=info msg="StopPodSandbox for \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\"" Jul 2 09:00:28.051695 systemd-networkd[1929]: calib65d5ffe587: Gained carrier Jul 2 09:00:28.083573 systemd[1]: Started cri-containerd-dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625.scope - libcontainer container dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625. Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.635 [INFO][5272] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0 coredns-7db6d8ff4d- kube-system 8785dba6-5d51-4a35-b2bd-16a3597800fe 887 0 2024-07-02 08:59:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-125 coredns-7db6d8ff4d-6z62b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib65d5ffe587 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z62b" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.635 [INFO][5272] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z62b" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.795 [INFO][5296] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" HandleID="k8s-pod-network.0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.834 [INFO][5296] ipam_plugin.go 264: Auto assigning IP ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" HandleID="k8s-pod-network.0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033ceb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-125", "pod":"coredns-7db6d8ff4d-6z62b", "timestamp":"2024-07-02 09:00:27.795620118 +0000 UTC"}, Hostname:"ip-172-31-26-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.835 [INFO][5296] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.835 [INFO][5296] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.835 [INFO][5296] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-125' Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.841 [INFO][5296] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" host="ip-172-31-26-125" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.862 [INFO][5296] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-125" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.898 [INFO][5296] ipam.go 489: Trying affinity for 192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.916 [INFO][5296] ipam.go 155: Attempting to load block cidr=192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.937 [INFO][5296] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.937 [INFO][5296] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" host="ip-172-31-26-125" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.945 [INFO][5296] ipam.go 1685: Creating new handle: k8s-pod-network.0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393 Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.960 [INFO][5296] ipam.go 1203: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" host="ip-172-31-26-125" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.989 [INFO][5296] ipam.go 1216: Successfully claimed IPs: [192.168.63.2/26] block=192.168.63.0/26 handle="k8s-pod-network.0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" host="ip-172-31-26-125" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.989 [INFO][5296] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.2/26] handle="k8s-pod-network.0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" host="ip-172-31-26-125" Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.993 [INFO][5296] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 09:00:28.086497 containerd[2016]: 2024-07-02 09:00:27.993 [INFO][5296] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.63.2/26] IPv6=[] ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" HandleID="k8s-pod-network.0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:28.087867 containerd[2016]: 2024-07-02 09:00:28.015 [INFO][5272] k8s.go 386: Populated endpoint ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z62b" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8785dba6-5d51-4a35-b2bd-16a3597800fe", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"", Pod:"coredns-7db6d8ff4d-6z62b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib65d5ffe587", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:28.087867 containerd[2016]: 2024-07-02 09:00:28.019 [INFO][5272] k8s.go 387: Calico CNI using IPs: [192.168.63.2/32] ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z62b" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:28.087867 containerd[2016]: 2024-07-02 09:00:28.022 [INFO][5272] dataplane_linux.go 68: Setting the host side veth name to calib65d5ffe587 ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z62b" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:28.087867 containerd[2016]: 2024-07-02 09:00:28.029 [INFO][5272] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z62b" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:28.087867 containerd[2016]: 2024-07-02 
09:00:28.030 [INFO][5272] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z62b" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8785dba6-5d51-4a35-b2bd-16a3597800fe", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393", Pod:"coredns-7db6d8ff4d-6z62b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib65d5ffe587", MAC:"e6:32:6b:8c:26:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:28.087867 containerd[2016]: 2024-07-02 09:00:28.067 [INFO][5272] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6z62b" WorkloadEndpoint="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:28.244694 containerd[2016]: time="2024-07-02T09:00:28.244544632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:00:28.249021 containerd[2016]: time="2024-07-02T09:00:28.247132528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:28.249021 containerd[2016]: time="2024-07-02T09:00:28.247227004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:00:28.249021 containerd[2016]: time="2024-07-02T09:00:28.247263136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:28.345586 systemd[1]: Started cri-containerd-0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393.scope - libcontainer container 0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393. 
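The WorkloadEndpoint dumps above print the coredns ports in hex (Port:0x35, Port:0x23c1); these are the same dns/dns-tcp/metrics ports listed in decimal at the top of the entry (UDP 53, TCP 53, TCP 9153). A purely illustrative decode:

# Decode the hex port values printed in the WorkloadEndpoint dump.
for name, port in (("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23c1)):
    print(name, port)   # dns 53, dns-tcp 53, metrics 9153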
Jul 2 09:00:28.380040 containerd[2016]: time="2024-07-02T09:00:28.377741177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hlf6,Uid:791c6356-b414-490e-bf36-89d3b52e7f10,Namespace:kube-system,Attempt:1,} returns sandbox id \"dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625\"" Jul 2 09:00:28.394421 containerd[2016]: time="2024-07-02T09:00:28.394343417Z" level=info msg="CreateContainer within sandbox \"dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 09:00:28.456931 containerd[2016]: time="2024-07-02T09:00:28.456051017Z" level=info msg="CreateContainer within sandbox \"dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d7a6f4f58e82e1bc3d47cdc76cea5b431c194f7c2fc447b38c598ceccd1519a\"" Jul 2 09:00:28.459905 containerd[2016]: time="2024-07-02T09:00:28.459555305Z" level=info msg="StartContainer for \"3d7a6f4f58e82e1bc3d47cdc76cea5b431c194f7c2fc447b38c598ceccd1519a\"" Jul 2 09:00:28.607277 containerd[2016]: time="2024-07-02T09:00:28.606642606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6z62b,Uid:8785dba6-5d51-4a35-b2bd-16a3597800fe,Namespace:kube-system,Attempt:1,} returns sandbox id \"0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393\"" Jul 2 09:00:28.637297 containerd[2016]: time="2024-07-02T09:00:28.637020894Z" level=info msg="CreateContainer within sandbox \"0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 09:00:28.640086 systemd[1]: Started cri-containerd-3d7a6f4f58e82e1bc3d47cdc76cea5b431c194f7c2fc447b38c598ceccd1519a.scope - libcontainer container 3d7a6f4f58e82e1bc3d47cdc76cea5b431c194f7c2fc447b38c598ceccd1519a. Jul 2 09:00:28.673998 systemd[1]: Started sshd@14-172.31.26.125:22-147.75.109.163:46564.service - OpenSSH per-connection server daemon (147.75.109.163:46564). Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.443 [INFO][5386] k8s.go 608: Cleaning up netns ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.445 [INFO][5386] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" iface="eth0" netns="/var/run/netns/cni-c7e7bc47-fca2-9f2e-30b1-e842618ad5b2" Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.447 [INFO][5386] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" iface="eth0" netns="/var/run/netns/cni-c7e7bc47-fca2-9f2e-30b1-e842618ad5b2" Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.449 [INFO][5386] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" iface="eth0" netns="/var/run/netns/cni-c7e7bc47-fca2-9f2e-30b1-e842618ad5b2" Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.454 [INFO][5386] k8s.go 615: Releasing IP address(es) ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.455 [INFO][5386] utils.go 188: Calico CNI releasing IP address ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.610 [INFO][5450] ipam_plugin.go 411: Releasing address using handleID ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" HandleID="k8s-pod-network.c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.611 [INFO][5450] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.615 [INFO][5450] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.663 [WARNING][5450] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" HandleID="k8s-pod-network.c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.663 [INFO][5450] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" HandleID="k8s-pod-network.c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.668 [INFO][5450] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:00:28.694844 containerd[2016]: 2024-07-02 09:00:28.682 [INFO][5386] k8s.go 621: Teardown processing complete. 
ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:28.696528 containerd[2016]: time="2024-07-02T09:00:28.695608986Z" level=info msg="TearDown network for sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\" successfully" Jul 2 09:00:28.696528 containerd[2016]: time="2024-07-02T09:00:28.695653386Z" level=info msg="StopPodSandbox for \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\" returns successfully" Jul 2 09:00:28.700914 containerd[2016]: time="2024-07-02T09:00:28.699071622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbzvw,Uid:f75ea9cc-4b02-4191-80d2-ca7bba2a9b74,Namespace:calico-system,Attempt:1,}" Jul 2 09:00:28.743440 containerd[2016]: time="2024-07-02T09:00:28.743147107Z" level=info msg="CreateContainer within sandbox \"0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b733636d9d2539470489bc005c86c04314501c57c918d4f4a0b0b051cad46596\"" Jul 2 09:00:28.746294 containerd[2016]: time="2024-07-02T09:00:28.746206327Z" level=info msg="StartContainer for \"b733636d9d2539470489bc005c86c04314501c57c918d4f4a0b0b051cad46596\"" Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.432 [INFO][5383] k8s.go 608: Cleaning up netns ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.439 [INFO][5383] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" iface="eth0" netns="/var/run/netns/cni-15c3e6c9-4980-8d1a-5fef-cc7c16c4abcc" Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.442 [INFO][5383] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" iface="eth0" netns="/var/run/netns/cni-15c3e6c9-4980-8d1a-5fef-cc7c16c4abcc" Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.442 [INFO][5383] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" iface="eth0" netns="/var/run/netns/cni-15c3e6c9-4980-8d1a-5fef-cc7c16c4abcc" Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.443 [INFO][5383] k8s.go 615: Releasing IP address(es) ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.443 [INFO][5383] utils.go 188: Calico CNI releasing IP address ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.708 [INFO][5449] ipam_plugin.go 411: Releasing address using handleID ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" HandleID="k8s-pod-network.95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.708 [INFO][5449] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.709 [INFO][5449] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.735 [WARNING][5449] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" HandleID="k8s-pod-network.95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.735 [INFO][5449] ipam_plugin.go 439: Releasing address using workloadID ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" HandleID="k8s-pod-network.95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.739 [INFO][5449] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:00:28.760208 containerd[2016]: 2024-07-02 09:00:28.749 [INFO][5383] k8s.go 621: Teardown processing complete. ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:00:28.763148 containerd[2016]: time="2024-07-02T09:00:28.761332483Z" level=info msg="TearDown network for sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\" successfully" Jul 2 09:00:28.763148 containerd[2016]: time="2024-07-02T09:00:28.761504623Z" level=info msg="StopPodSandbox for \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\" returns successfully" Jul 2 09:00:28.764068 containerd[2016]: time="2024-07-02T09:00:28.763994215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84f95fb8c5-z2z9x,Uid:17d9130b-b05e-4363-bd1a-27eab10c52c9,Namespace:calico-system,Attempt:1,}" Jul 2 09:00:28.846628 containerd[2016]: time="2024-07-02T09:00:28.846564115Z" level=info msg="StartContainer for \"3d7a6f4f58e82e1bc3d47cdc76cea5b431c194f7c2fc447b38c598ceccd1519a\" returns successfully" Jul 2 09:00:28.884915 sshd[5491]: Accepted publickey for core from 147.75.109.163 port 46564 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:28.889896 sshd[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:28.912785 systemd-logind[1990]: New session 15 of user core. Jul 2 09:00:28.930039 systemd[1]: Started cri-containerd-b733636d9d2539470489bc005c86c04314501c57c918d4f4a0b0b051cad46596.scope - libcontainer container b733636d9d2539470489bc005c86c04314501c57c918d4f4a0b0b051cad46596. Jul 2 09:00:28.931864 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 09:00:29.141176 containerd[2016]: time="2024-07-02T09:00:29.137918153Z" level=info msg="StartContainer for \"b733636d9d2539470489bc005c86c04314501c57c918d4f4a0b0b051cad46596\" returns successfully" Jul 2 09:00:29.325109 sshd[5491]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:29.334122 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 09:00:29.336178 systemd[1]: sshd@14-172.31.26.125:22-147.75.109.163:46564.service: Deactivated successfully. Jul 2 09:00:29.345787 systemd-logind[1990]: Session 15 logged out. Waiting for processes to exit. Jul 2 09:00:29.349409 systemd-logind[1990]: Removed session 15. Jul 2 09:00:29.386564 systemd[1]: run-netns-cni\x2dc7e7bc47\x2dfca2\x2d9f2e\x2d30b1\x2de842618ad5b2.mount: Deactivated successfully. Jul 2 09:00:29.387167 systemd[1]: run-netns-cni\x2d15c3e6c9\x2d4980\x2d8d1a\x2d5fef\x2dcc7c16c4abcc.mount: Deactivated successfully. 
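The two run-netns mount deactivations above correspond to the network namespaces Calico just tore down. The \x2d sequences are systemd's escaping of '-' when a path such as /run/netns/cni-c7e7bc47-... (the log prints it via the /var/run symlink) is turned into a mount unit name. The helper below is a rough sketch of that mapping, named here only for illustration and covering just the characters seen in these names, not systemd's full escaping rules:

def mount_unit_from_path(path: str) -> str:
    # '/' separates components and becomes '-'; a literal '-' is escaped as '\x2d'.
    parts = path.strip("/").split("/")
    return "-".join(p.replace("-", r"\x2d") for p in parts) + ".mount"

print(mount_unit_from_path("/run/netns/cni-c7e7bc47-fca2-9f2e-30b1-e842618ad5b2"))
# run-netns-cni\x2dc7e7bc47\x2dfca2\x2d9f2e\x2d30b1\x2de842618ad5b2.mount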
Jul 2 09:00:29.419046 systemd-networkd[1929]: cali8fe81971d77: Link UP Jul 2 09:00:29.455227 systemd-networkd[1929]: cali8fe81971d77: Gained carrier Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:28.948 [INFO][5501] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0 csi-node-driver- calico-system f75ea9cc-4b02-4191-80d2-ca7bba2a9b74 903 0 2024-07-02 08:59:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-26-125 csi-node-driver-dbzvw eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali8fe81971d77 [] []}} ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Namespace="calico-system" Pod="csi-node-driver-dbzvw" WorkloadEndpoint="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:28.950 [INFO][5501] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Namespace="calico-system" Pod="csi-node-driver-dbzvw" WorkloadEndpoint="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.066 [INFO][5558] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" HandleID="k8s-pod-network.745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.192 [INFO][5558] ipam_plugin.go 264: Auto assigning IP ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" HandleID="k8s-pod-network.745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c27c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-125", "pod":"csi-node-driver-dbzvw", "timestamp":"2024-07-02 09:00:29.066046228 +0000 UTC"}, Hostname:"ip-172-31-26-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.192 [INFO][5558] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.192 [INFO][5558] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.192 [INFO][5558] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-125' Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.206 [INFO][5558] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" host="ip-172-31-26-125" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.242 [INFO][5558] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-125" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.286 [INFO][5558] ipam.go 489: Trying affinity for 192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.300 [INFO][5558] ipam.go 155: Attempting to load block cidr=192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.317 [INFO][5558] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.317 [INFO][5558] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" host="ip-172-31-26-125" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.323 [INFO][5558] ipam.go 1685: Creating new handle: k8s-pod-network.745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18 Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.354 [INFO][5558] ipam.go 1203: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" host="ip-172-31-26-125" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.400 [INFO][5558] ipam.go 1216: Successfully claimed IPs: [192.168.63.3/26] block=192.168.63.0/26 handle="k8s-pod-network.745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" host="ip-172-31-26-125" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.400 [INFO][5558] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.3/26] handle="k8s-pod-network.745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" host="ip-172-31-26-125" Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.400 [INFO][5558] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 09:00:29.501893 containerd[2016]: 2024-07-02 09:00:29.400 [INFO][5558] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.63.3/26] IPv6=[] ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" HandleID="k8s-pod-network.745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:29.503560 containerd[2016]: 2024-07-02 09:00:29.408 [INFO][5501] k8s.go 386: Populated endpoint ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Namespace="calico-system" Pod="csi-node-driver-dbzvw" WorkloadEndpoint="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"", Pod:"csi-node-driver-dbzvw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8fe81971d77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:29.503560 containerd[2016]: 2024-07-02 09:00:29.409 [INFO][5501] k8s.go 387: Calico CNI using IPs: [192.168.63.3/32] ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Namespace="calico-system" Pod="csi-node-driver-dbzvw" WorkloadEndpoint="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:29.503560 containerd[2016]: 2024-07-02 09:00:29.409 [INFO][5501] dataplane_linux.go 68: Setting the host side veth name to cali8fe81971d77 ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Namespace="calico-system" Pod="csi-node-driver-dbzvw" WorkloadEndpoint="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:29.503560 containerd[2016]: 2024-07-02 09:00:29.422 [INFO][5501] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Namespace="calico-system" Pod="csi-node-driver-dbzvw" WorkloadEndpoint="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:29.503560 containerd[2016]: 2024-07-02 09:00:29.423 [INFO][5501] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Namespace="calico-system" Pod="csi-node-driver-dbzvw" WorkloadEndpoint="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18", Pod:"csi-node-driver-dbzvw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8fe81971d77", MAC:"ce:f2:f4:5a:ef:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:29.503560 containerd[2016]: 2024-07-02 09:00:29.493 [INFO][5501] k8s.go 500: Wrote updated endpoint to datastore ContainerID="745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18" Namespace="calico-system" Pod="csi-node-driver-dbzvw" WorkloadEndpoint="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:29.620015 systemd-networkd[1929]: cali180ab380623: Gained IPv6LL Jul 2 09:00:29.625939 containerd[2016]: time="2024-07-02T09:00:29.622532059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:00:29.625939 containerd[2016]: time="2024-07-02T09:00:29.622641727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:29.625939 containerd[2016]: time="2024-07-02T09:00:29.624813487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:00:29.625939 containerd[2016]: time="2024-07-02T09:00:29.624953743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:29.645056 systemd-networkd[1929]: cali7c53ecfe562: Link UP Jul 2 09:00:29.646123 systemd-networkd[1929]: cali7c53ecfe562: Gained carrier Jul 2 09:00:29.663003 kubelet[3538]: I0702 09:00:29.661041 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9hlf6" podStartSLOduration=45.661018435 podStartE2EDuration="45.661018435s" podCreationTimestamp="2024-07-02 08:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:00:29.660226519 +0000 UTC m=+59.850817570" watchObservedRunningTime="2024-07-02 09:00:29.661018435 +0000 UTC m=+59.851609462" Jul 2 09:00:29.663003 kubelet[3538]: I0702 09:00:29.661197 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6z62b" podStartSLOduration=45.661186495 podStartE2EDuration="45.661186495s" podCreationTimestamp="2024-07-02 08:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:00:29.594828991 +0000 UTC m=+59.785420030" watchObservedRunningTime="2024-07-02 09:00:29.661186495 +0000 UTC m=+59.851777546" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.044 [INFO][5521] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0 calico-kube-controllers-84f95fb8c5- calico-system 17d9130b-b05e-4363-bd1a-27eab10c52c9 901 0 2024-07-02 08:59:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84f95fb8c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-125 calico-kube-controllers-84f95fb8c5-z2z9x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7c53ecfe562 [] []}} ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Namespace="calico-system" Pod="calico-kube-controllers-84f95fb8c5-z2z9x" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.048 [INFO][5521] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Namespace="calico-system" Pod="calico-kube-controllers-84f95fb8c5-z2z9x" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.294 [INFO][5582] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" HandleID="k8s-pod-network.1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.343 [INFO][5582] ipam_plugin.go 264: Auto assigning IP ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" HandleID="k8s-pod-network.1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000308150), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-125", "pod":"calico-kube-controllers-84f95fb8c5-z2z9x", "timestamp":"2024-07-02 09:00:29.294797981 +0000 UTC"}, Hostname:"ip-172-31-26-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.344 [INFO][5582] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.402 [INFO][5582] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.403 [INFO][5582] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-125' Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.435 [INFO][5582] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" host="ip-172-31-26-125" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.493 [INFO][5582] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-125" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.524 [INFO][5582] ipam.go 489: Trying affinity for 192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.535 [INFO][5582] ipam.go 155: Attempting to load block cidr=192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.545 [INFO][5582] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.550 [INFO][5582] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" host="ip-172-31-26-125" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.557 [INFO][5582] ipam.go 1685: Creating new handle: k8s-pod-network.1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.595 [INFO][5582] ipam.go 1203: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" host="ip-172-31-26-125" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.614 [INFO][5582] ipam.go 1216: Successfully claimed IPs: [192.168.63.4/26] block=192.168.63.0/26 handle="k8s-pod-network.1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" host="ip-172-31-26-125" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.616 [INFO][5582] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.4/26] handle="k8s-pod-network.1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" host="ip-172-31-26-125" Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.617 [INFO][5582] ipam_plugin.go 373: Released host-wide IPAM lock. 
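The kubelet pod_startup_latency_tracker entries a few lines up report a podStartSLOduration of roughly 45.66 s for both coredns pods. That figure is consistent with simple timestamp arithmetic between podCreationTimestamp (08:59:44 UTC) and the running time observed around 09:00:29.66; a minimal check (not kubelet's actual code path), with the logged nanoseconds truncated to microseconds:

from datetime import datetime, timezone

created = datetime(2024, 7, 2, 8, 59, 44, tzinfo=timezone.utc)
running = datetime(2024, 7, 2, 9, 0, 29, 661018, tzinfo=timezone.utc)
print((running - created).total_seconds())  # 45.661018, in line with podStartSLOduration=45.661018435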
Jul 2 09:00:29.706332 containerd[2016]: 2024-07-02 09:00:29.617 [INFO][5582] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.63.4/26] IPv6=[] ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" HandleID="k8s-pod-network.1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:00:29.707509 containerd[2016]: 2024-07-02 09:00:29.626 [INFO][5521] k8s.go 386: Populated endpoint ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Namespace="calico-system" Pod="calico-kube-controllers-84f95fb8c5-z2z9x" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0", GenerateName:"calico-kube-controllers-84f95fb8c5-", Namespace:"calico-system", SelfLink:"", UID:"17d9130b-b05e-4363-bd1a-27eab10c52c9", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84f95fb8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"", Pod:"calico-kube-controllers-84f95fb8c5-z2z9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c53ecfe562", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:29.707509 containerd[2016]: 2024-07-02 09:00:29.628 [INFO][5521] k8s.go 387: Calico CNI using IPs: [192.168.63.4/32] ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Namespace="calico-system" Pod="calico-kube-controllers-84f95fb8c5-z2z9x" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:00:29.707509 containerd[2016]: 2024-07-02 09:00:29.628 [INFO][5521] dataplane_linux.go 68: Setting the host side veth name to cali7c53ecfe562 ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Namespace="calico-system" Pod="calico-kube-controllers-84f95fb8c5-z2z9x" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:00:29.707509 containerd[2016]: 2024-07-02 09:00:29.650 [INFO][5521] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Namespace="calico-system" Pod="calico-kube-controllers-84f95fb8c5-z2z9x" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:00:29.707509 containerd[2016]: 2024-07-02 09:00:29.652 [INFO][5521] k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Namespace="calico-system" Pod="calico-kube-controllers-84f95fb8c5-z2z9x" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0", GenerateName:"calico-kube-controllers-84f95fb8c5-", Namespace:"calico-system", SelfLink:"", UID:"17d9130b-b05e-4363-bd1a-27eab10c52c9", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84f95fb8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da", Pod:"calico-kube-controllers-84f95fb8c5-z2z9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c53ecfe562", MAC:"06:b8:80:d1:b9:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:29.707509 containerd[2016]: 2024-07-02 09:00:29.694 [INFO][5521] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da" Namespace="calico-system" Pod="calico-kube-controllers-84f95fb8c5-z2z9x" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:00:29.759148 systemd[1]: Started cri-containerd-745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18.scope - libcontainer container 745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18. Jul 2 09:00:29.803750 containerd[2016]: time="2024-07-02T09:00:29.803264612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:00:29.803750 containerd[2016]: time="2024-07-02T09:00:29.803388224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:29.803750 containerd[2016]: time="2024-07-02T09:00:29.803438300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:00:29.806305 containerd[2016]: time="2024-07-02T09:00:29.805559348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:29.876217 systemd[1]: Started cri-containerd-1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da.scope - libcontainer container 1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da. 
Jul 2 09:00:29.939976 systemd-networkd[1929]: calib65d5ffe587: Gained IPv6LL Jul 2 09:00:29.957867 containerd[2016]: time="2024-07-02T09:00:29.957618705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbzvw,Uid:f75ea9cc-4b02-4191-80d2-ca7bba2a9b74,Namespace:calico-system,Attempt:1,} returns sandbox id \"745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18\"" Jul 2 09:00:29.966675 containerd[2016]: time="2024-07-02T09:00:29.965423781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 09:00:30.091587 containerd[2016]: time="2024-07-02T09:00:30.091523909Z" level=info msg="StopPodSandbox for \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\"" Jul 2 09:00:30.093260 containerd[2016]: time="2024-07-02T09:00:30.093121685Z" level=info msg="TearDown network for sandbox \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\" successfully" Jul 2 09:00:30.093260 containerd[2016]: time="2024-07-02T09:00:30.093238553Z" level=info msg="StopPodSandbox for \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\" returns successfully" Jul 2 09:00:30.095582 containerd[2016]: time="2024-07-02T09:00:30.095493845Z" level=info msg="RemovePodSandbox for \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\"" Jul 2 09:00:30.096494 containerd[2016]: time="2024-07-02T09:00:30.096110537Z" level=info msg="Forcibly stopping sandbox \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\"" Jul 2 09:00:30.097188 containerd[2016]: time="2024-07-02T09:00:30.097130261Z" level=info msg="TearDown network for sandbox \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\" successfully" Jul 2 09:00:30.115076 containerd[2016]: time="2024-07-02T09:00:30.114916217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
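Around 09:00:30 containerd stops and then force-removes several stale sandboxes; where a sandbox is already gone it logs the "an error occurred when try to find sandbox: not found" warning (the ungrammatical wording is containerd's own) and still reports RemovePodSandbox as successful. When grepping a journal like this one, the sandbox IDs and verbs can be pulled out with a short, purely illustrative parser written against the exact quoting shown above; the optional " for" covers both the announcement and the "returns successfully" forms:

import re

line = r'containerd[2016]: time="2024-07-02T09:00:30.115110449Z" level=info msg="RemovePodSandbox \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\" returns successfully"'
m = re.search(r'msg="(StopPodSandbox|RemovePodSandbox)(?: for)? \\"([0-9a-f]{64})\\"', line)
if m:
    print(m.group(1), m.group(2)[:12])   # RemovePodSandbox c5004d1ff119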
Jul 2 09:00:30.115415 containerd[2016]: time="2024-07-02T09:00:30.115110449Z" level=info msg="RemovePodSandbox \"c5004d1ff119c305d6e376984e8ae161cda6e5ad8214e63df27e8d85f97c41f4\" returns successfully" Jul 2 09:00:30.119848 containerd[2016]: time="2024-07-02T09:00:30.117277745Z" level=info msg="StopPodSandbox for \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\"" Jul 2 09:00:30.119848 containerd[2016]: time="2024-07-02T09:00:30.117422225Z" level=info msg="TearDown network for sandbox \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\" successfully" Jul 2 09:00:30.119848 containerd[2016]: time="2024-07-02T09:00:30.117484769Z" level=info msg="StopPodSandbox for \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\" returns successfully" Jul 2 09:00:30.120157 containerd[2016]: time="2024-07-02T09:00:30.120097913Z" level=info msg="RemovePodSandbox for \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\"" Jul 2 09:00:30.120227 containerd[2016]: time="2024-07-02T09:00:30.120159653Z" level=info msg="Forcibly stopping sandbox \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\"" Jul 2 09:00:30.120335 containerd[2016]: time="2024-07-02T09:00:30.120294269Z" level=info msg="TearDown network for sandbox \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\" successfully" Jul 2 09:00:30.128988 containerd[2016]: time="2024-07-02T09:00:30.128771861Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:00:30.128988 containerd[2016]: time="2024-07-02T09:00:30.128868365Z" level=info msg="RemovePodSandbox \"6cfd9a23c9686802a6ace3a24e4016adb654de139a7ad4085265dc75ddbd9944\" returns successfully" Jul 2 09:00:30.131748 containerd[2016]: time="2024-07-02T09:00:30.131607797Z" level=info msg="StopPodSandbox for \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\"" Jul 2 09:00:30.353567 containerd[2016]: time="2024-07-02T09:00:30.352775119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84f95fb8c5-z2z9x,Uid:17d9130b-b05e-4363-bd1a-27eab10c52c9,Namespace:calico-system,Attempt:1,} returns sandbox id \"1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da\"" Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.256 [WARNING][5714] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8785dba6-5d51-4a35-b2bd-16a3597800fe", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393", Pod:"coredns-7db6d8ff4d-6z62b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib65d5ffe587", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.257 [INFO][5714] k8s.go 608: Cleaning up netns ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.257 [INFO][5714] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" iface="eth0" netns="" Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.263 [INFO][5714] k8s.go 615: Releasing IP address(es) ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.264 [INFO][5714] utils.go 188: Calico CNI releasing IP address ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.378 [INFO][5728] ipam_plugin.go 411: Releasing address using handleID ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" HandleID="k8s-pod-network.a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.378 [INFO][5728] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.378 [INFO][5728] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.407 [WARNING][5728] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" HandleID="k8s-pod-network.a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.407 [INFO][5728] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" HandleID="k8s-pod-network.a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.415 [INFO][5728] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:00:30.425826 containerd[2016]: 2024-07-02 09:00:30.420 [INFO][5714] k8s.go 621: Teardown processing complete. ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:30.426764 containerd[2016]: time="2024-07-02T09:00:30.425886571Z" level=info msg="TearDown network for sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\" successfully" Jul 2 09:00:30.426764 containerd[2016]: time="2024-07-02T09:00:30.425927695Z" level=info msg="StopPodSandbox for \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\" returns successfully" Jul 2 09:00:30.429036 containerd[2016]: time="2024-07-02T09:00:30.428985331Z" level=info msg="RemovePodSandbox for \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\"" Jul 2 09:00:30.430811 containerd[2016]: time="2024-07-02T09:00:30.429448627Z" level=info msg="Forcibly stopping sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\"" Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.540 [WARNING][5755] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8785dba6-5d51-4a35-b2bd-16a3597800fe", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"0775f26192ee69a279189744156fcf593e4ccea006ef938895a861194baa4393", Pod:"coredns-7db6d8ff4d-6z62b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib65d5ffe587", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.540 [INFO][5755] k8s.go 608: Cleaning up netns ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.540 [INFO][5755] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" iface="eth0" netns="" Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.540 [INFO][5755] k8s.go 615: Releasing IP address(es) ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.540 [INFO][5755] utils.go 188: Calico CNI releasing IP address ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.698 [INFO][5763] ipam_plugin.go 411: Releasing address using handleID ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" HandleID="k8s-pod-network.a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.699 [INFO][5763] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.699 [INFO][5763] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.716 [WARNING][5763] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" HandleID="k8s-pod-network.a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.717 [INFO][5763] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" HandleID="k8s-pod-network.a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--6z62b-eth0" Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.721 [INFO][5763] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:00:30.729892 containerd[2016]: 2024-07-02 09:00:30.725 [INFO][5755] k8s.go 621: Teardown processing complete. ContainerID="a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe" Jul 2 09:00:30.733987 containerd[2016]: time="2024-07-02T09:00:30.731849744Z" level=info msg="TearDown network for sandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\" successfully" Jul 2 09:00:30.740930 containerd[2016]: time="2024-07-02T09:00:30.740805860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:00:30.741268 containerd[2016]: time="2024-07-02T09:00:30.741221012Z" level=info msg="RemovePodSandbox \"a37a1156e9ee259b798f931baadd6b3b18669626e6066a9f6f25433945b3e1fe\" returns successfully" Jul 2 09:00:30.742260 containerd[2016]: time="2024-07-02T09:00:30.742203812Z" level=info msg="StopPodSandbox for \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\"" Jul 2 09:00:30.835976 systemd-networkd[1929]: cali8fe81971d77: Gained IPv6LL Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.829 [WARNING][5782] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"791c6356-b414-490e-bf36-89d3b52e7f10", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625", Pod:"coredns-7db6d8ff4d-9hlf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali180ab380623", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.830 [INFO][5782] k8s.go 608: Cleaning up netns ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.830 [INFO][5782] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" iface="eth0" netns="" Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.830 [INFO][5782] k8s.go 615: Releasing IP address(es) ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.830 [INFO][5782] utils.go 188: Calico CNI releasing IP address ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.890 [INFO][5788] ipam_plugin.go 411: Releasing address using handleID ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" HandleID="k8s-pod-network.98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.890 [INFO][5788] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.891 [INFO][5788] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.911 [WARNING][5788] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" HandleID="k8s-pod-network.98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.912 [INFO][5788] ipam_plugin.go 439: Releasing address using workloadID ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" HandleID="k8s-pod-network.98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.916 [INFO][5788] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:00:30.927966 containerd[2016]: 2024-07-02 09:00:30.924 [INFO][5782] k8s.go 621: Teardown processing complete. ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:30.930169 containerd[2016]: time="2024-07-02T09:00:30.928052397Z" level=info msg="TearDown network for sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\" successfully" Jul 2 09:00:30.930169 containerd[2016]: time="2024-07-02T09:00:30.928092333Z" level=info msg="StopPodSandbox for \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\" returns successfully" Jul 2 09:00:30.930169 containerd[2016]: time="2024-07-02T09:00:30.929213421Z" level=info msg="RemovePodSandbox for \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\"" Jul 2 09:00:30.930169 containerd[2016]: time="2024-07-02T09:00:30.929264769Z" level=info msg="Forcibly stopping sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\"" Jul 2 09:00:30.964350 systemd-networkd[1929]: cali7c53ecfe562: Gained IPv6LL Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.018 [WARNING][5806] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"791c6356-b414-490e-bf36-89d3b52e7f10", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"dbf1954210785e94baca802e1f9813a468fafc837d008eeb2fb892ab93c8f625", Pod:"coredns-7db6d8ff4d-9hlf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali180ab380623", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.019 [INFO][5806] k8s.go 608: Cleaning up netns ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.019 [INFO][5806] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" iface="eth0" netns="" Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.020 [INFO][5806] k8s.go 615: Releasing IP address(es) ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.020 [INFO][5806] utils.go 188: Calico CNI releasing IP address ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.079 [INFO][5812] ipam_plugin.go 411: Releasing address using handleID ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" HandleID="k8s-pod-network.98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.079 [INFO][5812] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.079 [INFO][5812] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.098 [WARNING][5812] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" HandleID="k8s-pod-network.98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.098 [INFO][5812] ipam_plugin.go 439: Releasing address using workloadID ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" HandleID="k8s-pod-network.98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Workload="ip--172--31--26--125-k8s-coredns--7db6d8ff4d--9hlf6-eth0" Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.100 [INFO][5812] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:00:31.106210 containerd[2016]: 2024-07-02 09:00:31.103 [INFO][5806] k8s.go 621: Teardown processing complete. ContainerID="98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de" Jul 2 09:00:31.106210 containerd[2016]: time="2024-07-02T09:00:31.106161006Z" level=info msg="TearDown network for sandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\" successfully" Jul 2 09:00:31.113955 containerd[2016]: time="2024-07-02T09:00:31.113885682Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:00:31.114265 containerd[2016]: time="2024-07-02T09:00:31.114010914Z" level=info msg="RemovePodSandbox \"98559a168a952972698a05b240d3ffa7e624d8354377b1723adc04677f8a11de\" returns successfully" Jul 2 09:00:31.115931 containerd[2016]: time="2024-07-02T09:00:31.115855854Z" level=info msg="StopPodSandbox for \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\"" Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.211 [WARNING][5830] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18", Pod:"csi-node-driver-dbzvw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8fe81971d77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.212 [INFO][5830] k8s.go 608: Cleaning up netns ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.213 [INFO][5830] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" iface="eth0" netns="" Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.213 [INFO][5830] k8s.go 615: Releasing IP address(es) ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.213 [INFO][5830] utils.go 188: Calico CNI releasing IP address ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.301 [INFO][5836] ipam_plugin.go 411: Releasing address using handleID ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" HandleID="k8s-pod-network.c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.301 [INFO][5836] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.301 [INFO][5836] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.325 [WARNING][5836] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" HandleID="k8s-pod-network.c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.325 [INFO][5836] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" HandleID="k8s-pod-network.c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.331 [INFO][5836] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:00:31.342158 containerd[2016]: 2024-07-02 09:00:31.339 [INFO][5830] k8s.go 621: Teardown processing complete. ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:31.342158 containerd[2016]: time="2024-07-02T09:00:31.342128347Z" level=info msg="TearDown network for sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\" successfully" Jul 2 09:00:31.342158 containerd[2016]: time="2024-07-02T09:00:31.342167251Z" level=info msg="StopPodSandbox for \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\" returns successfully" Jul 2 09:00:31.343373 containerd[2016]: time="2024-07-02T09:00:31.343220611Z" level=info msg="RemovePodSandbox for \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\"" Jul 2 09:00:31.343686 containerd[2016]: time="2024-07-02T09:00:31.343500463Z" level=info msg="Forcibly stopping sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\"" Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.451 [WARNING][5855] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f75ea9cc-4b02-4191-80d2-ca7bba2a9b74", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18", Pod:"csi-node-driver-dbzvw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.63.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8fe81971d77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.452 [INFO][5855] k8s.go 608: Cleaning up netns ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.452 [INFO][5855] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" iface="eth0" netns="" Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.452 [INFO][5855] k8s.go 615: Releasing IP address(es) ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.453 [INFO][5855] utils.go 188: Calico CNI releasing IP address ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.540 [INFO][5861] ipam_plugin.go 411: Releasing address using handleID ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" HandleID="k8s-pod-network.c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.540 [INFO][5861] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.540 [INFO][5861] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.561 [WARNING][5861] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" HandleID="k8s-pod-network.c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.561 [INFO][5861] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" HandleID="k8s-pod-network.c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Workload="ip--172--31--26--125-k8s-csi--node--driver--dbzvw-eth0" Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.564 [INFO][5861] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:00:31.570826 containerd[2016]: 2024-07-02 09:00:31.567 [INFO][5855] k8s.go 621: Teardown processing complete. ContainerID="c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33" Jul 2 09:00:31.574895 containerd[2016]: time="2024-07-02T09:00:31.571539537Z" level=info msg="TearDown network for sandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\" successfully" Jul 2 09:00:31.580638 containerd[2016]: time="2024-07-02T09:00:31.579975261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:00:31.580638 containerd[2016]: time="2024-07-02T09:00:31.580068057Z" level=info msg="RemovePodSandbox \"c5af17198c9c0352619cdfff4f0ac1ed54312bf1d6428e930e18142ef3a02a33\" returns successfully" Jul 2 09:00:32.223989 containerd[2016]: time="2024-07-02T09:00:32.223892912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:32.226638 containerd[2016]: time="2024-07-02T09:00:32.226571336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 09:00:32.228731 containerd[2016]: time="2024-07-02T09:00:32.228666500Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:32.233530 containerd[2016]: time="2024-07-02T09:00:32.233401616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:32.236063 containerd[2016]: time="2024-07-02T09:00:32.235943120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 2.269025699s" Jul 2 09:00:32.236331 containerd[2016]: time="2024-07-02T09:00:32.236062448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 09:00:32.237944 containerd[2016]: time="2024-07-02T09:00:32.237895724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 09:00:32.245191 containerd[2016]: time="2024-07-02T09:00:32.244025276Z" level=info 
msg="CreateContainer within sandbox \"745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 09:00:32.287608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount459469809.mount: Deactivated successfully. Jul 2 09:00:32.328665 containerd[2016]: time="2024-07-02T09:00:32.328575548Z" level=info msg="CreateContainer within sandbox \"745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9c5cff132d4820e3f8d2fa56faac3dfc5d7edf583bb0ba78df94cbfa361c7149\"" Jul 2 09:00:32.329683 containerd[2016]: time="2024-07-02T09:00:32.329627252Z" level=info msg="StartContainer for \"9c5cff132d4820e3f8d2fa56faac3dfc5d7edf583bb0ba78df94cbfa361c7149\"" Jul 2 09:00:32.416316 systemd[1]: Started cri-containerd-9c5cff132d4820e3f8d2fa56faac3dfc5d7edf583bb0ba78df94cbfa361c7149.scope - libcontainer container 9c5cff132d4820e3f8d2fa56faac3dfc5d7edf583bb0ba78df94cbfa361c7149. Jul 2 09:00:32.505946 containerd[2016]: time="2024-07-02T09:00:32.505511025Z" level=info msg="StartContainer for \"9c5cff132d4820e3f8d2fa56faac3dfc5d7edf583bb0ba78df94cbfa361c7149\" returns successfully" Jul 2 09:00:33.050975 ntpd[1983]: Listen normally on 8 vxlan.calico 192.168.63.0:123 Jul 2 09:00:33.052945 ntpd[1983]: 2 Jul 09:00:33 ntpd[1983]: Listen normally on 8 vxlan.calico 192.168.63.0:123 Jul 2 09:00:33.052945 ntpd[1983]: 2 Jul 09:00:33 ntpd[1983]: Listen normally on 9 vxlan.calico [fe80::64c9:2dff:fe2e:3490%4]:123 Jul 2 09:00:33.052945 ntpd[1983]: 2 Jul 09:00:33 ntpd[1983]: Listen normally on 10 cali180ab380623 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 09:00:33.052945 ntpd[1983]: 2 Jul 09:00:33 ntpd[1983]: Listen normally on 11 calib65d5ffe587 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 09:00:33.052945 ntpd[1983]: 2 Jul 09:00:33 ntpd[1983]: Listen normally on 12 cali8fe81971d77 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 09:00:33.052945 ntpd[1983]: 2 Jul 09:00:33 ntpd[1983]: Listen normally on 13 cali7c53ecfe562 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 09:00:33.051141 ntpd[1983]: Listen normally on 9 vxlan.calico [fe80::64c9:2dff:fe2e:3490%4]:123 Jul 2 09:00:33.051225 ntpd[1983]: Listen normally on 10 cali180ab380623 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 09:00:33.051312 ntpd[1983]: Listen normally on 11 calib65d5ffe587 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 09:00:33.051383 ntpd[1983]: Listen normally on 12 cali8fe81971d77 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 09:00:33.051448 ntpd[1983]: Listen normally on 13 cali7c53ecfe562 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 09:00:34.365242 systemd[1]: Started sshd@15-172.31.26.125:22-147.75.109.163:41912.service - OpenSSH per-connection server daemon (147.75.109.163:41912). Jul 2 09:00:34.544475 sshd[5906]: Accepted publickey for core from 147.75.109.163 port 41912 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:34.547404 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:34.555289 systemd-logind[1990]: New session 16 of user core. Jul 2 09:00:34.565979 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 09:00:34.821224 sshd[5906]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:34.828695 systemd[1]: sshd@15-172.31.26.125:22-147.75.109.163:41912.service: Deactivated successfully. Jul 2 09:00:34.833651 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 09:00:34.835990 systemd-logind[1990]: Session 16 logged out. 
Waiting for processes to exit. Jul 2 09:00:34.838142 systemd-logind[1990]: Removed session 16. Jul 2 09:00:35.733189 containerd[2016]: time="2024-07-02T09:00:35.733122025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:35.734991 containerd[2016]: time="2024-07-02T09:00:35.734898985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 09:00:35.737086 containerd[2016]: time="2024-07-02T09:00:35.737008153Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:35.741909 containerd[2016]: time="2024-07-02T09:00:35.741807805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:35.743613 containerd[2016]: time="2024-07-02T09:00:35.743419597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 3.505463129s" Jul 2 09:00:35.743613 containerd[2016]: time="2024-07-02T09:00:35.743474965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 09:00:35.746746 containerd[2016]: time="2024-07-02T09:00:35.746056309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 09:00:35.786632 containerd[2016]: time="2024-07-02T09:00:35.786445658Z" level=info msg="CreateContainer within sandbox \"1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 09:00:35.817835 containerd[2016]: time="2024-07-02T09:00:35.817759070Z" level=info msg="CreateContainer within sandbox \"1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e7eccc8aee7be4fb8fa75fa97d39c324c303e31f53f6116ed88ca98f4d5299b5\"" Jul 2 09:00:35.818511 containerd[2016]: time="2024-07-02T09:00:35.818455286Z" level=info msg="StartContainer for \"e7eccc8aee7be4fb8fa75fa97d39c324c303e31f53f6116ed88ca98f4d5299b5\"" Jul 2 09:00:35.877049 systemd[1]: Started cri-containerd-e7eccc8aee7be4fb8fa75fa97d39c324c303e31f53f6116ed88ca98f4d5299b5.scope - libcontainer container e7eccc8aee7be4fb8fa75fa97d39c324c303e31f53f6116ed88ca98f4d5299b5. 
Jul 2 09:00:35.950367 containerd[2016]: time="2024-07-02T09:00:35.950291402Z" level=info msg="StartContainer for \"e7eccc8aee7be4fb8fa75fa97d39c324c303e31f53f6116ed88ca98f4d5299b5\" returns successfully" Jul 2 09:00:36.682139 kubelet[3538]: I0702 09:00:36.681983 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84f95fb8c5-z2z9x" podStartSLOduration=36.29357378 podStartE2EDuration="41.68195111s" podCreationTimestamp="2024-07-02 08:59:55 +0000 UTC" firstStartedPulling="2024-07-02 09:00:30.356917663 +0000 UTC m=+60.547508690" lastFinishedPulling="2024-07-02 09:00:35.745294993 +0000 UTC m=+65.935886020" observedRunningTime="2024-07-02 09:00:36.680090858 +0000 UTC m=+66.870681885" watchObservedRunningTime="2024-07-02 09:00:36.68195111 +0000 UTC m=+66.872542137" Jul 2 09:00:37.168012 containerd[2016]: time="2024-07-02T09:00:37.167896020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:37.169675 containerd[2016]: time="2024-07-02T09:00:37.169598712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 09:00:37.171465 containerd[2016]: time="2024-07-02T09:00:37.171386436Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:37.177272 containerd[2016]: time="2024-07-02T09:00:37.177150396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:00:37.178759 containerd[2016]: time="2024-07-02T09:00:37.178583448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.432462723s" Jul 2 09:00:37.178759 containerd[2016]: time="2024-07-02T09:00:37.178644948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 09:00:37.183863 containerd[2016]: time="2024-07-02T09:00:37.183743400Z" level=info msg="CreateContainer within sandbox \"745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 09:00:37.207724 containerd[2016]: time="2024-07-02T09:00:37.207640981Z" level=info msg="CreateContainer within sandbox \"745abb9f0a59b255cf36806bbcf4262b51e33fb4b267a8ff3ee61d05f0230d18\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7a4a6205a23f7d3ade6124b22b8a01d0458b0b647ee8d0030e26ec99c4526f26\"" Jul 2 09:00:37.211001 containerd[2016]: time="2024-07-02T09:00:37.209890105Z" level=info msg="StartContainer for \"7a4a6205a23f7d3ade6124b22b8a01d0458b0b647ee8d0030e26ec99c4526f26\"" Jul 2 09:00:37.265410 systemd[1]: run-containerd-runc-k8s.io-7a4a6205a23f7d3ade6124b22b8a01d0458b0b647ee8d0030e26ec99c4526f26-runc.unHagy.mount: Deactivated successfully. 
Jul 2 09:00:37.276024 systemd[1]: Started cri-containerd-7a4a6205a23f7d3ade6124b22b8a01d0458b0b647ee8d0030e26ec99c4526f26.scope - libcontainer container 7a4a6205a23f7d3ade6124b22b8a01d0458b0b647ee8d0030e26ec99c4526f26. Jul 2 09:00:37.333302 containerd[2016]: time="2024-07-02T09:00:37.333221053Z" level=info msg="StartContainer for \"7a4a6205a23f7d3ade6124b22b8a01d0458b0b647ee8d0030e26ec99c4526f26\" returns successfully" Jul 2 09:00:37.679046 kubelet[3538]: I0702 09:00:37.678931 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dbzvw" podStartSLOduration=38.461835536 podStartE2EDuration="45.678908859s" podCreationTimestamp="2024-07-02 08:59:52 +0000 UTC" firstStartedPulling="2024-07-02 09:00:29.963497481 +0000 UTC m=+60.154088496" lastFinishedPulling="2024-07-02 09:00:37.180570792 +0000 UTC m=+67.371161819" observedRunningTime="2024-07-02 09:00:37.675516939 +0000 UTC m=+67.866107990" watchObservedRunningTime="2024-07-02 09:00:37.678908859 +0000 UTC m=+67.869499886" Jul 2 09:00:38.294746 kubelet[3538]: I0702 09:00:38.294647 3538 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 09:00:38.294746 kubelet[3538]: I0702 09:00:38.294693 3538 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 09:00:39.866200 systemd[1]: Started sshd@16-172.31.26.125:22-147.75.109.163:41918.service - OpenSSH per-connection server daemon (147.75.109.163:41918). Jul 2 09:00:40.045835 sshd[6035]: Accepted publickey for core from 147.75.109.163 port 41918 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:40.052686 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:40.068636 systemd-logind[1990]: New session 17 of user core. Jul 2 09:00:40.076099 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 09:00:40.407037 sshd[6035]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:40.418786 systemd[1]: sshd@16-172.31.26.125:22-147.75.109.163:41918.service: Deactivated successfully. Jul 2 09:00:40.424509 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 09:00:40.433135 systemd-logind[1990]: Session 17 logged out. Waiting for processes to exit. Jul 2 09:00:40.435452 systemd-logind[1990]: Removed session 17. Jul 2 09:00:45.449053 systemd[1]: Started sshd@17-172.31.26.125:22-147.75.109.163:42222.service - OpenSSH per-connection server daemon (147.75.109.163:42222). Jul 2 09:00:45.634115 sshd[6075]: Accepted publickey for core from 147.75.109.163 port 42222 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:45.636689 sshd[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:45.645207 systemd-logind[1990]: New session 18 of user core. Jul 2 09:00:45.653994 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 09:00:45.896811 sshd[6075]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:45.903688 systemd[1]: sshd@17-172.31.26.125:22-147.75.109.163:42222.service: Deactivated successfully. Jul 2 09:00:45.908132 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 09:00:45.910118 systemd-logind[1990]: Session 18 logged out. Waiting for processes to exit. Jul 2 09:00:45.912477 systemd-logind[1990]: Removed session 18. 
Jul 2 09:00:45.937247 systemd[1]: Started sshd@18-172.31.26.125:22-147.75.109.163:42232.service - OpenSSH per-connection server daemon (147.75.109.163:42232). Jul 2 09:00:46.115841 sshd[6092]: Accepted publickey for core from 147.75.109.163 port 42232 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:46.118438 sshd[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:46.127391 systemd-logind[1990]: New session 19 of user core. Jul 2 09:00:46.133976 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 09:00:46.568093 sshd[6092]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:46.573549 systemd[1]: sshd@18-172.31.26.125:22-147.75.109.163:42232.service: Deactivated successfully. Jul 2 09:00:46.576951 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 09:00:46.581663 systemd-logind[1990]: Session 19 logged out. Waiting for processes to exit. Jul 2 09:00:46.584109 systemd-logind[1990]: Removed session 19. Jul 2 09:00:46.608215 systemd[1]: Started sshd@19-172.31.26.125:22-147.75.109.163:42236.service - OpenSSH per-connection server daemon (147.75.109.163:42236). Jul 2 09:00:46.804822 sshd[6105]: Accepted publickey for core from 147.75.109.163 port 42236 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:46.807786 sshd[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:46.824891 systemd-logind[1990]: New session 20 of user core. Jul 2 09:00:46.832471 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 09:00:49.843895 sshd[6105]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:49.856821 systemd[1]: sshd@19-172.31.26.125:22-147.75.109.163:42236.service: Deactivated successfully. Jul 2 09:00:49.866129 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 09:00:49.888571 systemd-logind[1990]: Session 20 logged out. Waiting for processes to exit. Jul 2 09:00:49.895053 systemd[1]: Started sshd@20-172.31.26.125:22-147.75.109.163:42240.service - OpenSSH per-connection server daemon (147.75.109.163:42240). Jul 2 09:00:49.900290 systemd-logind[1990]: Removed session 20. Jul 2 09:00:50.089446 sshd[6144]: Accepted publickey for core from 147.75.109.163 port 42240 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:50.094629 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:50.108687 systemd-logind[1990]: New session 21 of user core. Jul 2 09:00:50.115151 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 09:00:50.699992 sshd[6144]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:50.705413 systemd[1]: sshd@20-172.31.26.125:22-147.75.109.163:42240.service: Deactivated successfully. Jul 2 09:00:50.709556 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 09:00:50.712771 systemd-logind[1990]: Session 21 logged out. Waiting for processes to exit. Jul 2 09:00:50.716000 systemd-logind[1990]: Removed session 21. Jul 2 09:00:50.734219 systemd[1]: Started sshd@21-172.31.26.125:22-147.75.109.163:42254.service - OpenSSH per-connection server daemon (147.75.109.163:42254). 
Jul 2 09:00:50.915145 sshd[6159]: Accepted publickey for core from 147.75.109.163 port 42254 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:50.917754 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:50.927368 systemd-logind[1990]: New session 22 of user core. Jul 2 09:00:50.931990 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 09:00:51.166397 sshd[6159]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:51.173910 systemd[1]: sshd@21-172.31.26.125:22-147.75.109.163:42254.service: Deactivated successfully. Jul 2 09:00:51.178196 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 09:00:51.180910 systemd-logind[1990]: Session 22 logged out. Waiting for processes to exit. Jul 2 09:00:51.182973 systemd-logind[1990]: Removed session 22. Jul 2 09:00:56.207446 systemd[1]: Started sshd@22-172.31.26.125:22-147.75.109.163:54534.service - OpenSSH per-connection server daemon (147.75.109.163:54534). Jul 2 09:00:56.286862 kubelet[3538]: I0702 09:00:56.286520 3538 topology_manager.go:215] "Topology Admit Handler" podUID="5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6" podNamespace="calico-apiserver" podName="calico-apiserver-54964cdd6f-7sp46" Jul 2 09:00:56.310348 kubelet[3538]: W0702 09:00:56.306057 3538 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-26-125" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-26-125' and this object Jul 2 09:00:56.310348 kubelet[3538]: E0702 09:00:56.306114 3538 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-26-125" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-26-125' and this object Jul 2 09:00:56.310348 kubelet[3538]: W0702 09:00:56.306178 3538 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-26-125" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-26-125' and this object Jul 2 09:00:56.310348 kubelet[3538]: E0702 09:00:56.306202 3538 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-26-125" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-26-125' and this object Jul 2 09:00:56.309214 systemd[1]: Created slice kubepods-besteffort-pod5ee8f0d8_90a5_46cb_8d0f_5791ab29ebd6.slice - libcontainer container kubepods-besteffort-pod5ee8f0d8_90a5_46cb_8d0f_5791ab29ebd6.slice. 
Jul 2 09:00:56.349917 kubelet[3538]: I0702 09:00:56.349799 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgn2d\" (UniqueName: \"kubernetes.io/projected/5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6-kube-api-access-fgn2d\") pod \"calico-apiserver-54964cdd6f-7sp46\" (UID: \"5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6\") " pod="calico-apiserver/calico-apiserver-54964cdd6f-7sp46" Jul 2 09:00:56.349917 kubelet[3538]: I0702 09:00:56.349881 3538 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6-calico-apiserver-certs\") pod \"calico-apiserver-54964cdd6f-7sp46\" (UID: \"5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6\") " pod="calico-apiserver/calico-apiserver-54964cdd6f-7sp46" Jul 2 09:00:56.432856 sshd[6174]: Accepted publickey for core from 147.75.109.163 port 54534 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:00:56.436407 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:00:56.450119 systemd-logind[1990]: New session 23 of user core. Jul 2 09:00:56.460398 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 09:00:56.754294 sshd[6174]: pam_unix(sshd:session): session closed for user core Jul 2 09:00:56.763260 systemd-logind[1990]: Session 23 logged out. Waiting for processes to exit. Jul 2 09:00:56.765052 systemd[1]: sshd@22-172.31.26.125:22-147.75.109.163:54534.service: Deactivated successfully. Jul 2 09:00:56.773197 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 09:00:56.783239 systemd-logind[1990]: Removed session 23. Jul 2 09:00:57.455973 kubelet[3538]: E0702 09:00:57.453309 3538 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 09:00:57.455973 kubelet[3538]: E0702 09:00:57.453424 3538 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6-calico-apiserver-certs podName:5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6 nodeName:}" failed. No retries permitted until 2024-07-02 09:00:57.953394101 +0000 UTC m=+88.143985128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6-calico-apiserver-certs") pod "calico-apiserver-54964cdd6f-7sp46" (UID: "5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6") : failed to sync secret cache: timed out waiting for the condition Jul 2 09:00:58.131644 containerd[2016]: time="2024-07-02T09:00:58.131431689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54964cdd6f-7sp46,Uid:5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6,Namespace:calico-apiserver,Attempt:0,}" Jul 2 09:00:58.422097 systemd-networkd[1929]: cali9179e7c6199: Link UP Jul 2 09:00:58.424063 systemd-networkd[1929]: cali9179e7c6199: Gained carrier Jul 2 09:00:58.439073 (udev-worker)[6218]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.223 [INFO][6199] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0 calico-apiserver-54964cdd6f- calico-apiserver 5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6 1138 0 2024-07-02 09:00:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54964cdd6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-125 calico-apiserver-54964cdd6f-7sp46 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9179e7c6199 [] []}} ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Namespace="calico-apiserver" Pod="calico-apiserver-54964cdd6f-7sp46" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.223 [INFO][6199] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Namespace="calico-apiserver" Pod="calico-apiserver-54964cdd6f-7sp46" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.297 [INFO][6210] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" HandleID="k8s-pod-network.d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Workload="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.314 [INFO][6210] ipam_plugin.go 264: Auto assigning IP ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" HandleID="k8s-pod-network.d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Workload="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-125", "pod":"calico-apiserver-54964cdd6f-7sp46", "timestamp":"2024-07-02 09:00:58.297774609 +0000 UTC"}, Hostname:"ip-172-31-26-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.314 [INFO][6210] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.314 [INFO][6210] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.314 [INFO][6210] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-125' Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.317 [INFO][6210] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" host="ip-172-31-26-125" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.330 [INFO][6210] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-125" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.344 [INFO][6210] ipam.go 489: Trying affinity for 192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.352 [INFO][6210] ipam.go 155: Attempting to load block cidr=192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.359 [INFO][6210] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ip-172-31-26-125" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.361 [INFO][6210] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" host="ip-172-31-26-125" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.373 [INFO][6210] ipam.go 1685: Creating new handle: k8s-pod-network.d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.384 [INFO][6210] ipam.go 1203: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" host="ip-172-31-26-125" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.411 [INFO][6210] ipam.go 1216: Successfully claimed IPs: [192.168.63.5/26] block=192.168.63.0/26 handle="k8s-pod-network.d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" host="ip-172-31-26-125" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.411 [INFO][6210] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.5/26] handle="k8s-pod-network.d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" host="ip-172-31-26-125" Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.411 [INFO][6210] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 09:00:58.480315 containerd[2016]: 2024-07-02 09:00:58.411 [INFO][6210] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.63.5/26] IPv6=[] ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" HandleID="k8s-pod-network.d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Workload="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0" Jul 2 09:00:58.482350 containerd[2016]: 2024-07-02 09:00:58.416 [INFO][6199] k8s.go 386: Populated endpoint ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Namespace="calico-apiserver" Pod="calico-apiserver-54964cdd6f-7sp46" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0", GenerateName:"calico-apiserver-54964cdd6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54964cdd6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"", Pod:"calico-apiserver-54964cdd6f-7sp46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9179e7c6199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:58.482350 containerd[2016]: 2024-07-02 09:00:58.417 [INFO][6199] k8s.go 387: Calico CNI using IPs: [192.168.63.5/32] ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Namespace="calico-apiserver" Pod="calico-apiserver-54964cdd6f-7sp46" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0" Jul 2 09:00:58.482350 containerd[2016]: 2024-07-02 09:00:58.417 [INFO][6199] dataplane_linux.go 68: Setting the host side veth name to cali9179e7c6199 ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Namespace="calico-apiserver" Pod="calico-apiserver-54964cdd6f-7sp46" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0" Jul 2 09:00:58.482350 containerd[2016]: 2024-07-02 09:00:58.423 [INFO][6199] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Namespace="calico-apiserver" Pod="calico-apiserver-54964cdd6f-7sp46" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0" Jul 2 09:00:58.482350 containerd[2016]: 2024-07-02 09:00:58.427 [INFO][6199] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Namespace="calico-apiserver" Pod="calico-apiserver-54964cdd6f-7sp46" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0", GenerateName:"calico-apiserver-54964cdd6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 0, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54964cdd6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e", Pod:"calico-apiserver-54964cdd6f-7sp46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9179e7c6199", MAC:"02:34:72:5e:7e:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:00:58.482350 containerd[2016]: 2024-07-02 09:00:58.473 [INFO][6199] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e" Namespace="calico-apiserver" Pod="calico-apiserver-54964cdd6f-7sp46" WorkloadEndpoint="ip--172--31--26--125-k8s-calico--apiserver--54964cdd6f--7sp46-eth0" Jul 2 09:00:58.549239 containerd[2016]: time="2024-07-02T09:00:58.547062263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:00:58.549239 containerd[2016]: time="2024-07-02T09:00:58.547158755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:58.549239 containerd[2016]: time="2024-07-02T09:00:58.547196039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:00:58.549239 containerd[2016]: time="2024-07-02T09:00:58.547221479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:00:58.638580 systemd[1]: Started cri-containerd-d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e.scope - libcontainer container d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e. 
Jul 2 09:00:58.759289 containerd[2016]: time="2024-07-02T09:00:58.759176628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54964cdd6f-7sp46,Uid:5ee8f0d8-90a5-46cb-8d0f-5791ab29ebd6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e\"" Jul 2 09:00:58.764126 containerd[2016]: time="2024-07-02T09:00:58.763654596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 09:01:00.404001 systemd-networkd[1929]: cali9179e7c6199: Gained IPv6LL Jul 2 09:01:01.452234 containerd[2016]: time="2024-07-02T09:01:01.451770517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:01:01.454998 containerd[2016]: time="2024-07-02T09:01:01.454922413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jul 2 09:01:01.457192 containerd[2016]: time="2024-07-02T09:01:01.457113961Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:01:01.463091 containerd[2016]: time="2024-07-02T09:01:01.463005841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:01:01.464848 containerd[2016]: time="2024-07-02T09:01:01.464612125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.700890461s" Jul 2 09:01:01.464848 containerd[2016]: time="2024-07-02T09:01:01.464667049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 09:01:01.471783 containerd[2016]: time="2024-07-02T09:01:01.471617917Z" level=info msg="CreateContainer within sandbox \"d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 09:01:01.504032 containerd[2016]: time="2024-07-02T09:01:01.503964649Z" level=info msg="CreateContainer within sandbox \"d0201635082ef91bc851a988da97c7b5ac7870b2d3b5c1f7463bbaa13986313e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"652df29eba955cc9e526280ac71b8f220ad5c4e2d45afe26aa95d5415ddb9a08\"" Jul 2 09:01:01.505410 containerd[2016]: time="2024-07-02T09:01:01.505129753Z" level=info msg="StartContainer for \"652df29eba955cc9e526280ac71b8f220ad5c4e2d45afe26aa95d5415ddb9a08\"" Jul 2 09:01:01.577334 systemd[1]: Started cri-containerd-652df29eba955cc9e526280ac71b8f220ad5c4e2d45afe26aa95d5415ddb9a08.scope - libcontainer container 652df29eba955cc9e526280ac71b8f220ad5c4e2d45afe26aa95d5415ddb9a08. 
Jul 2 09:01:01.661941 containerd[2016]: time="2024-07-02T09:01:01.661643042Z" level=info msg="StartContainer for \"652df29eba955cc9e526280ac71b8f220ad5c4e2d45afe26aa95d5415ddb9a08\" returns successfully" Jul 2 09:01:01.745882 kubelet[3538]: I0702 09:01:01.745667 3538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54964cdd6f-7sp46" podStartSLOduration=3.041998401 podStartE2EDuration="5.745615694s" podCreationTimestamp="2024-07-02 09:00:56 +0000 UTC" firstStartedPulling="2024-07-02 09:00:58.763001568 +0000 UTC m=+88.953592595" lastFinishedPulling="2024-07-02 09:01:01.466618873 +0000 UTC m=+91.657209888" observedRunningTime="2024-07-02 09:01:01.745616978 +0000 UTC m=+91.936208029" watchObservedRunningTime="2024-07-02 09:01:01.745615694 +0000 UTC m=+91.936206721" Jul 2 09:01:01.799413 systemd[1]: Started sshd@23-172.31.26.125:22-147.75.109.163:54538.service - OpenSSH per-connection server daemon (147.75.109.163:54538). Jul 2 09:01:02.001999 sshd[6324]: Accepted publickey for core from 147.75.109.163 port 54538 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:01:02.002670 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:01:02.017802 systemd-logind[1990]: New session 24 of user core. Jul 2 09:01:02.026183 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 09:01:02.308130 sshd[6324]: pam_unix(sshd:session): session closed for user core Jul 2 09:01:02.317693 systemd[1]: sshd@23-172.31.26.125:22-147.75.109.163:54538.service: Deactivated successfully. Jul 2 09:01:02.322628 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 09:01:02.325296 systemd-logind[1990]: Session 24 logged out. Waiting for processes to exit. Jul 2 09:01:02.330880 systemd-logind[1990]: Removed session 24. Jul 2 09:01:03.051197 ntpd[1983]: Listen normally on 14 cali9179e7c6199 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 2 09:01:03.052095 ntpd[1983]: 2 Jul 09:01:03 ntpd[1983]: Listen normally on 14 cali9179e7c6199 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 2 09:01:07.358272 systemd[1]: Started sshd@24-172.31.26.125:22-147.75.109.163:58584.service - OpenSSH per-connection server daemon (147.75.109.163:58584). Jul 2 09:01:07.540379 sshd[6355]: Accepted publickey for core from 147.75.109.163 port 58584 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:01:07.543125 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:01:07.554816 systemd-logind[1990]: New session 25 of user core. Jul 2 09:01:07.562059 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 09:01:07.822123 sshd[6355]: pam_unix(sshd:session): session closed for user core Jul 2 09:01:07.832009 systemd[1]: sshd@24-172.31.26.125:22-147.75.109.163:58584.service: Deactivated successfully. Jul 2 09:01:07.838705 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 09:01:07.844899 systemd-logind[1990]: Session 25 logged out. Waiting for processes to exit. Jul 2 09:01:07.848583 systemd-logind[1990]: Removed session 25. Jul 2 09:01:12.863282 systemd[1]: Started sshd@25-172.31.26.125:22-147.75.109.163:38060.service - OpenSSH per-connection server daemon (147.75.109.163:38060). 
Jul 2 09:01:13.047867 sshd[6371]: Accepted publickey for core from 147.75.109.163 port 38060 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:01:13.051682 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:01:13.059799 systemd-logind[1990]: New session 26 of user core. Jul 2 09:01:13.066986 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 09:01:13.309356 sshd[6371]: pam_unix(sshd:session): session closed for user core Jul 2 09:01:13.315995 systemd[1]: sshd@25-172.31.26.125:22-147.75.109.163:38060.service: Deactivated successfully. Jul 2 09:01:13.320609 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 09:01:13.322823 systemd-logind[1990]: Session 26 logged out. Waiting for processes to exit. Jul 2 09:01:13.325687 systemd-logind[1990]: Removed session 26. Jul 2 09:01:18.349246 systemd[1]: Started sshd@26-172.31.26.125:22-147.75.109.163:38066.service - OpenSSH per-connection server daemon (147.75.109.163:38066). Jul 2 09:01:18.531579 sshd[6431]: Accepted publickey for core from 147.75.109.163 port 38066 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:01:18.534644 sshd[6431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:01:18.542809 systemd-logind[1990]: New session 27 of user core. Jul 2 09:01:18.552984 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 09:01:18.789195 sshd[6431]: pam_unix(sshd:session): session closed for user core Jul 2 09:01:18.795905 systemd[1]: sshd@26-172.31.26.125:22-147.75.109.163:38066.service: Deactivated successfully. Jul 2 09:01:18.800470 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 09:01:18.803128 systemd-logind[1990]: Session 27 logged out. Waiting for processes to exit. Jul 2 09:01:18.805640 systemd-logind[1990]: Removed session 27. Jul 2 09:01:21.061423 update_engine[1991]: I0702 09:01:21.061302 1991 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 09:01:21.061423 update_engine[1991]: I0702 09:01:21.061372 1991 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 09:01:21.062162 update_engine[1991]: I0702 09:01:21.061802 1991 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 09:01:21.063998 update_engine[1991]: I0702 09:01:21.063850 1991 omaha_request_params.cc:62] Current group set to beta Jul 2 09:01:21.063998 update_engine[1991]: I0702 09:01:21.064026 1991 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 09:01:21.063998 update_engine[1991]: I0702 09:01:21.064041 1991 update_attempter.cc:643] Scheduling an action processor start. 
Jul 2 09:01:21.064267 update_engine[1991]: I0702 09:01:21.064066 1991 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 09:01:21.064267 update_engine[1991]: I0702 09:01:21.064128 1991 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 09:01:21.064267 update_engine[1991]: I0702 09:01:21.064226 1991 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 09:01:21.064267 update_engine[1991]: I0702 09:01:21.064239 1991 omaha_request_action.cc:272] Request: Jul 2 09:01:21.064267 update_engine[1991]: Jul 2 09:01:21.064267 update_engine[1991]: Jul 2 09:01:21.064267 update_engine[1991]: Jul 2 09:01:21.064267 update_engine[1991]: Jul 2 09:01:21.064267 update_engine[1991]: Jul 2 09:01:21.064267 update_engine[1991]: Jul 2 09:01:21.064267 update_engine[1991]: Jul 2 09:01:21.064267 update_engine[1991]: Jul 2 09:01:21.064267 update_engine[1991]: I0702 09:01:21.064249 1991 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 09:01:21.065647 locksmithd[2028]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 2 09:01:21.068582 update_engine[1991]: I0702 09:01:21.068518 1991 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 09:01:21.069030 update_engine[1991]: I0702 09:01:21.068982 1991 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 09:01:21.077487 update_engine[1991]: E0702 09:01:21.077435 1991 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 09:01:21.077621 update_engine[1991]: I0702 09:01:21.077539 1991 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 09:01:23.829224 systemd[1]: Started sshd@27-172.31.26.125:22-147.75.109.163:46732.service - OpenSSH per-connection server daemon (147.75.109.163:46732). Jul 2 09:01:24.011776 sshd[6446]: Accepted publickey for core from 147.75.109.163 port 46732 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:01:24.014367 sshd[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:01:24.022912 systemd-logind[1990]: New session 28 of user core. Jul 2 09:01:24.027994 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 09:01:24.264469 sshd[6446]: pam_unix(sshd:session): session closed for user core Jul 2 09:01:24.271193 systemd[1]: sshd@27-172.31.26.125:22-147.75.109.163:46732.service: Deactivated successfully. Jul 2 09:01:24.276148 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 09:01:24.277617 systemd-logind[1990]: Session 28 logged out. Waiting for processes to exit. Jul 2 09:01:24.279447 systemd-logind[1990]: Removed session 28. Jul 2 09:01:29.306328 systemd[1]: Started sshd@28-172.31.26.125:22-147.75.109.163:46738.service - OpenSSH per-connection server daemon (147.75.109.163:46738). Jul 2 09:01:29.482614 sshd[6463]: Accepted publickey for core from 147.75.109.163 port 46738 ssh2: RSA SHA256:gBHRyphzFit/GiT6THj2ofQNJnkVrUD4ZXRbaD6jNmo Jul 2 09:01:29.485171 sshd[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:01:29.492782 systemd-logind[1990]: New session 29 of user core. Jul 2 09:01:29.504978 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 09:01:29.747074 sshd[6463]: pam_unix(sshd:session): session closed for user core Jul 2 09:01:29.753390 systemd[1]: sshd@28-172.31.26.125:22-147.75.109.163:46738.service: Deactivated successfully. 
Jul 2 09:01:29.759451 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 09:01:29.763138 systemd-logind[1990]: Session 29 logged out. Waiting for processes to exit. Jul 2 09:01:29.765407 systemd-logind[1990]: Removed session 29. Jul 2 09:01:31.064095 update_engine[1991]: I0702 09:01:31.061766 1991 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 09:01:31.064095 update_engine[1991]: I0702 09:01:31.062059 1991 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 09:01:31.064095 update_engine[1991]: I0702 09:01:31.062383 1991 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 09:01:31.067441 update_engine[1991]: E0702 09:01:31.067134 1991 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 09:01:31.067441 update_engine[1991]: I0702 09:01:31.067226 1991 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 09:01:31.586790 containerd[2016]: time="2024-07-02T09:01:31.586731787Z" level=info msg="StopPodSandbox for \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\"" Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.647 [WARNING][6489] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0", GenerateName:"calico-kube-controllers-84f95fb8c5-", Namespace:"calico-system", SelfLink:"", UID:"17d9130b-b05e-4363-bd1a-27eab10c52c9", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84f95fb8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da", Pod:"calico-kube-controllers-84f95fb8c5-z2z9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c53ecfe562", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.648 [INFO][6489] k8s.go 608: Cleaning up netns ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.648 [INFO][6489] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" iface="eth0" netns="" Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.648 [INFO][6489] k8s.go 615: Releasing IP address(es) ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.648 [INFO][6489] utils.go 188: Calico CNI releasing IP address ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.690 [INFO][6495] ipam_plugin.go 411: Releasing address using handleID ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" HandleID="k8s-pod-network.95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.690 [INFO][6495] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.690 [INFO][6495] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.704 [WARNING][6495] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" HandleID="k8s-pod-network.95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.704 [INFO][6495] ipam_plugin.go 439: Releasing address using workloadID ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" HandleID="k8s-pod-network.95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.708 [INFO][6495] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:01:31.713899 containerd[2016]: 2024-07-02 09:01:31.711 [INFO][6489] k8s.go 621: Teardown processing complete. ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:01:31.715981 containerd[2016]: time="2024-07-02T09:01:31.714922555Z" level=info msg="TearDown network for sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\" successfully" Jul 2 09:01:31.715981 containerd[2016]: time="2024-07-02T09:01:31.714988495Z" level=info msg="StopPodSandbox for \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\" returns successfully" Jul 2 09:01:31.716850 containerd[2016]: time="2024-07-02T09:01:31.716323423Z" level=info msg="RemovePodSandbox for \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\"" Jul 2 09:01:31.716850 containerd[2016]: time="2024-07-02T09:01:31.716373991Z" level=info msg="Forcibly stopping sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\"" Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.776 [WARNING][6515] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0", GenerateName:"calico-kube-controllers-84f95fb8c5-", Namespace:"calico-system", SelfLink:"", UID:"17d9130b-b05e-4363-bd1a-27eab10c52c9", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 59, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84f95fb8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-125", ContainerID:"1c75359f9ccade1d2d4884afdcd5c029623465631c2d7fcee97fe02ed219f9da", Pod:"calico-kube-controllers-84f95fb8c5-z2z9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c53ecfe562", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.776 [INFO][6515] k8s.go 608: Cleaning up netns ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.776 [INFO][6515] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" iface="eth0" netns="" Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.776 [INFO][6515] k8s.go 615: Releasing IP address(es) ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.777 [INFO][6515] utils.go 188: Calico CNI releasing IP address ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.811 [INFO][6522] ipam_plugin.go 411: Releasing address using handleID ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" HandleID="k8s-pod-network.95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.811 [INFO][6522] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.811 [INFO][6522] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.825 [WARNING][6522] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" HandleID="k8s-pod-network.95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.825 [INFO][6522] ipam_plugin.go 439: Releasing address using workloadID ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" HandleID="k8s-pod-network.95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Workload="ip--172--31--26--125-k8s-calico--kube--controllers--84f95fb8c5--z2z9x-eth0" Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.828 [INFO][6522] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:01:31.832851 containerd[2016]: 2024-07-02 09:01:31.830 [INFO][6515] k8s.go 621: Teardown processing complete. ContainerID="95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521" Jul 2 09:01:31.834185 containerd[2016]: time="2024-07-02T09:01:31.832876256Z" level=info msg="TearDown network for sandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\" successfully" Jul 2 09:01:31.839980 containerd[2016]: time="2024-07-02T09:01:31.839834852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:01:31.839980 containerd[2016]: time="2024-07-02T09:01:31.839940812Z" level=info msg="RemovePodSandbox \"95483a33030a8797c51ad0c57f775bba9e5f1d97d76081f6502f63a1008fd521\" returns successfully"