Aug 5 21:35:06.274821 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Aug 5 21:35:06.274873 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:24:20 -00 2024
Aug 5 21:35:06.274898 kernel: KASLR disabled due to lack of seed
Aug 5 21:35:06.274915 kernel: efi: EFI v2.7 by EDK II
Aug 5 21:35:06.279010 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18
Aug 5 21:35:06.279072 kernel: ACPI: Early table checksum verification disabled
Aug 5 21:35:06.279092 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Aug 5 21:35:06.279108 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Aug 5 21:35:06.279125 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 5 21:35:06.279142 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Aug 5 21:35:06.279173 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 5 21:35:06.279190 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Aug 5 21:35:06.279206 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Aug 5 21:35:06.279223 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Aug 5 21:35:06.279243 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 5 21:35:06.279265 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Aug 5 21:35:06.279283 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Aug 5 21:35:06.279299 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Aug 5 21:35:06.279316 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Aug 5 21:35:06.279333 kernel: printk: bootconsole [uart0] enabled
Aug 5 21:35:06.279350 kernel: NUMA: Failed to initialise from firmware
Aug 5 21:35:06.279367 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 5 21:35:06.279384 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Aug 5 21:35:06.279401 kernel: Zone ranges:
Aug 5 21:35:06.279419 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Aug 5 21:35:06.279435 kernel: DMA32 empty
Aug 5 21:35:06.279456 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Aug 5 21:35:06.279473 kernel: Movable zone start for each node
Aug 5 21:35:06.279489 kernel: Early memory node ranges
Aug 5 21:35:06.279506 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Aug 5 21:35:06.279522 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Aug 5 21:35:06.279539 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Aug 5 21:35:06.279555 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Aug 5 21:35:06.279573 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Aug 5 21:35:06.279589 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Aug 5 21:35:06.279606 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Aug 5 21:35:06.279622 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Aug 5 21:35:06.279639 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 5 21:35:06.279660 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Aug 5 21:35:06.279677 kernel: psci: probing for conduit method from ACPI.
Aug 5 21:35:06.279701 kernel: psci: PSCIv1.0 detected in firmware.
Aug 5 21:35:06.279719 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 5 21:35:06.279737 kernel: psci: Trusted OS migration not required
Aug 5 21:35:06.279759 kernel: psci: SMC Calling Convention v1.1
Aug 5 21:35:06.279777 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Aug 5 21:35:06.279795 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Aug 5 21:35:06.279813 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 5 21:35:06.279831 kernel: Detected PIPT I-cache on CPU0
Aug 5 21:35:06.279848 kernel: CPU features: detected: GIC system register CPU interface
Aug 5 21:35:06.279866 kernel: CPU features: detected: Spectre-v2
Aug 5 21:35:06.279884 kernel: CPU features: detected: Spectre-v3a
Aug 5 21:35:06.279902 kernel: CPU features: detected: Spectre-BHB
Aug 5 21:35:06.279919 kernel: CPU features: detected: ARM erratum 1742098
Aug 5 21:35:06.279986 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Aug 5 21:35:06.280015 kernel: alternatives: applying boot alternatives
Aug 5 21:35:06.280036 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e
Aug 5 21:35:06.280064 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 21:35:06.280082 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 21:35:06.280100 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 21:35:06.280118 kernel: Fallback order for Node 0: 0
Aug 5 21:35:06.280136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Aug 5 21:35:06.280155 kernel: Policy zone: Normal
Aug 5 21:35:06.280172 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 21:35:06.280190 kernel: software IO TLB: area num 2.
Aug 5 21:35:06.280208 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Aug 5 21:35:06.280233 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Aug 5 21:35:06.280252 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 5 21:35:06.280295 kernel: trace event string verifier disabled
Aug 5 21:35:06.280314 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 21:35:06.280335 kernel: rcu: RCU event tracing is enabled.
Aug 5 21:35:06.280354 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 5 21:35:06.280373 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 21:35:06.280391 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 21:35:06.280409 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
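The kernel command line logged above carries the Flatcar boot configuration as whitespace-separated key=value pairs and bare flags (root=LABEL=ROOT, verity.usrhash=..., earlycon, and so on). As a rough illustration of how such a line breaks down, here is a minimal Python sketch that splits a /proc/cmdline-style string into a dictionary; the parse_cmdline helper and its parsing rules are illustrative assumptions, not the kernel's actual parameter parser.

    from collections import defaultdict

    def parse_cmdline(cmdline: str) -> dict:
        """Split a /proc/cmdline-style string into flags and key=value pairs (sketch)."""
        params = defaultdict(list)
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            # Bare flags such as "earlycon" are recorded as True.
            params[key].append(value if sep else True)
        return dict(params)

    sample = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
              "root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon "
              "flatcar.first_boot=detected flatcar.oem.id=ec2 net.ifnames=0")
    parsed = parse_cmdline(sample)
    print(parsed["console"])         # ['tty1', 'ttyS0,115200n8'] - repeated keys accumulate
    print(parsed["flatcar.oem.id"])  # ['ec2']
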
Aug 5 21:35:06.280428 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 5 21:35:06.280447 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 5 21:35:06.280473 kernel: GICv3: 96 SPIs implemented
Aug 5 21:35:06.280491 kernel: GICv3: 0 Extended SPIs implemented
Aug 5 21:35:06.280508 kernel: Root IRQ handler: gic_handle_irq
Aug 5 21:35:06.280527 kernel: GICv3: GICv3 features: 16 PPIs
Aug 5 21:35:06.280544 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Aug 5 21:35:06.280562 kernel: ITS [mem 0x10080000-0x1009ffff]
Aug 5 21:35:06.280580 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Aug 5 21:35:06.280598 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Aug 5 21:35:06.280617 kernel: GICv3: using LPI property table @0x00000004000e0000
Aug 5 21:35:06.280636 kernel: ITS: Using hypervisor restricted LPI range [128]
Aug 5 21:35:06.280653 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Aug 5 21:35:06.280671 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 21:35:06.280694 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Aug 5 21:35:06.280712 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Aug 5 21:35:06.280732 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Aug 5 21:35:06.280751 kernel: Console: colour dummy device 80x25
Aug 5 21:35:06.280771 kernel: printk: console [tty1] enabled
Aug 5 21:35:06.280791 kernel: ACPI: Core revision 20230628
Aug 5 21:35:06.280811 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Aug 5 21:35:06.280830 kernel: pid_max: default: 32768 minimum: 301
Aug 5 21:35:06.280850 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 21:35:06.280874 kernel: SELinux: Initializing.
Aug 5 21:35:06.280893 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:35:06.280912 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:35:06.281084 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:35:06.281111 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:35:06.281129 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 21:35:06.281148 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 21:35:06.281166 kernel: Platform MSI: ITS@0x10080000 domain created
Aug 5 21:35:06.281184 kernel: PCI/MSI: ITS@0x10080000 domain created
Aug 5 21:35:06.281214 kernel: Remapping and enabling EFI services.
Aug 5 21:35:06.281232 kernel: smp: Bringing up secondary CPUs ...
Aug 5 21:35:06.281250 kernel: Detected PIPT I-cache on CPU1
Aug 5 21:35:06.281268 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Aug 5 21:35:06.281287 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Aug 5 21:35:06.281305 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Aug 5 21:35:06.281323 kernel: smp: Brought up 1 node, 2 CPUs
Aug 5 21:35:06.281342 kernel: SMP: Total of 2 processors activated.
Aug 5 21:35:06.281360 kernel: CPU features: detected: 32-bit EL0 Support
Aug 5 21:35:06.281378 kernel: CPU features: detected: 32-bit EL1 Support
Aug 5 21:35:06.281402 kernel: CPU features: detected: CRC32 instructions
Aug 5 21:35:06.281422 kernel: CPU: All CPU(s) started at EL1
Aug 5 21:35:06.281455 kernel: alternatives: applying system-wide alternatives
Aug 5 21:35:06.281479 kernel: devtmpfs: initialized
Aug 5 21:35:06.281498 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 21:35:06.281519 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 5 21:35:06.281538 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 21:35:06.281556 kernel: SMBIOS 3.0.0 present.
Aug 5 21:35:06.281575 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Aug 5 21:35:06.281598 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 21:35:06.281617 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 5 21:35:06.281636 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 5 21:35:06.281655 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 5 21:35:06.281673 kernel: audit: initializing netlink subsys (disabled)
Aug 5 21:35:06.281692 kernel: audit: type=2000 audit(0.304:1): state=initialized audit_enabled=0 res=1
Aug 5 21:35:06.281711 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 21:35:06.281735 kernel: cpuidle: using governor menu
Aug 5 21:35:06.281794 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 5 21:35:06.281835 kernel: ASID allocator initialised with 65536 entries
Aug 5 21:35:06.281855 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 21:35:06.281874 kernel: Serial: AMBA PL011 UART driver
Aug 5 21:35:06.281893 kernel: Modules: 17600 pages in range for non-PLT usage
Aug 5 21:35:06.281911 kernel: Modules: 509120 pages in range for PLT usage
Aug 5 21:35:06.283989 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 21:35:06.284029 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 21:35:06.284058 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 5 21:35:06.284078 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 5 21:35:06.284097 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 21:35:06.284116 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 21:35:06.284135 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 5 21:35:06.284154 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 5 21:35:06.284173 kernel: ACPI: Added _OSI(Module Device)
Aug 5 21:35:06.284191 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 21:35:06.284210 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 21:35:06.284233 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 21:35:06.284254 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 21:35:06.284272 kernel: ACPI: Interpreter enabled
Aug 5 21:35:06.284291 kernel: ACPI: Using GIC for interrupt routing
Aug 5 21:35:06.284309 kernel: ACPI: MCFG table detected, 1 entries
Aug 5 21:35:06.284328 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Aug 5 21:35:06.284650 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 21:35:06.284863 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 5 21:35:06.286068 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 5 21:35:06.286342 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Aug 5 21:35:06.286568 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Aug 5 21:35:06.286597 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Aug 5 21:35:06.286617 kernel: acpiphp: Slot [1] registered
Aug 5 21:35:06.286636 kernel: acpiphp: Slot [2] registered
Aug 5 21:35:06.286655 kernel: acpiphp: Slot [3] registered
Aug 5 21:35:06.286675 kernel: acpiphp: Slot [4] registered
Aug 5 21:35:06.286705 kernel: acpiphp: Slot [5] registered
Aug 5 21:35:06.286724 kernel: acpiphp: Slot [6] registered
Aug 5 21:35:06.286743 kernel: acpiphp: Slot [7] registered
Aug 5 21:35:06.286763 kernel: acpiphp: Slot [8] registered
Aug 5 21:35:06.286782 kernel: acpiphp: Slot [9] registered
Aug 5 21:35:06.286800 kernel: acpiphp: Slot [10] registered
Aug 5 21:35:06.286820 kernel: acpiphp: Slot [11] registered
Aug 5 21:35:06.286839 kernel: acpiphp: Slot [12] registered
Aug 5 21:35:06.286859 kernel: acpiphp: Slot [13] registered
Aug 5 21:35:06.286879 kernel: acpiphp: Slot [14] registered
Aug 5 21:35:06.286904 kernel: acpiphp: Slot [15] registered
Aug 5 21:35:06.286923 kernel: acpiphp: Slot [16] registered
Aug 5 21:35:06.289061 kernel: acpiphp: Slot [17] registered
Aug 5 21:35:06.289085 kernel: acpiphp: Slot [18] registered
Aug 5 21:35:06.289104 kernel: acpiphp: Slot [19] registered
Aug 5 21:35:06.289123 kernel: acpiphp: Slot [20] registered
Aug 5 21:35:06.289142 kernel: acpiphp: Slot [21] registered
Aug 5 21:35:06.289161 kernel: acpiphp: Slot [22] registered
Aug 5 21:35:06.289181 kernel: acpiphp: Slot [23] registered
Aug 5 21:35:06.289214 kernel: acpiphp: Slot [24] registered
Aug 5 21:35:06.289234 kernel: acpiphp: Slot [25] registered
Aug 5 21:35:06.289252 kernel: acpiphp: Slot [26] registered
Aug 5 21:35:06.289272 kernel: acpiphp: Slot [27] registered
Aug 5 21:35:06.289290 kernel: acpiphp: Slot [28] registered
Aug 5 21:35:06.289309 kernel: acpiphp: Slot [29] registered
Aug 5 21:35:06.289327 kernel: acpiphp: Slot [30] registered
Aug 5 21:35:06.289347 kernel: acpiphp: Slot [31] registered
Aug 5 21:35:06.289365 kernel: PCI host bridge to bus 0000:00
Aug 5 21:35:06.289765 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Aug 5 21:35:06.292334 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 5 21:35:06.292535 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Aug 5 21:35:06.292716 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Aug 5 21:35:06.295043 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Aug 5 21:35:06.295341 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Aug 5 21:35:06.295563 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Aug 5 21:35:06.295809 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 5 21:35:06.296151 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Aug 5 21:35:06.296372 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 5 21:35:06.296598 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 5 21:35:06.296807 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Aug 5 21:35:06.298057 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Aug 5 21:35:06.298323 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Aug 5 21:35:06.298530 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 5 21:35:06.298733 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Aug 5 21:35:06.298955 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Aug 5 21:35:06.299179 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Aug 5 21:35:06.299390 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Aug 5 21:35:06.299625 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Aug 5 21:35:06.299827 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Aug 5 21:35:06.300137 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 5 21:35:06.300335 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Aug 5 21:35:06.300363 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 5 21:35:06.300384 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 5 21:35:06.300403 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 5 21:35:06.300422 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 5 21:35:06.300441 kernel: iommu: Default domain type: Translated
Aug 5 21:35:06.300460 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 5 21:35:06.300490 kernel: efivars: Registered efivars operations
Aug 5 21:35:06.300509 kernel: vgaarb: loaded
Aug 5 21:35:06.300528 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 5 21:35:06.300547 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 21:35:06.300566 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 21:35:06.300585 kernel: pnp: PnP ACPI init
Aug 5 21:35:06.300831 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Aug 5 21:35:06.300860 kernel: pnp: PnP ACPI: found 1 devices
Aug 5 21:35:06.300885 kernel: NET: Registered PF_INET protocol family
Aug 5 21:35:06.300906 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 21:35:06.300994 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 21:35:06.301022 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 21:35:06.301042 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 21:35:06.301062 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 21:35:06.301081 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 21:35:06.301100 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:35:06.301119 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:35:06.301147 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 21:35:06.301166 kernel: PCI: CLS 0 bytes, default 64
Aug 5 21:35:06.301185 kernel: kvm [1]: HYP mode not available
Aug 5 21:35:06.301203 kernel: Initialise system trusted keyrings
Aug 5 21:35:06.301222 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 21:35:06.301241 kernel: Key type asymmetric registered
Aug 5 21:35:06.301260 kernel: Asymmetric key parser 'x509' registered
Aug 5 21:35:06.301279 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 5 21:35:06.301298 kernel: io scheduler mq-deadline registered
Aug 5 21:35:06.301323 kernel: io scheduler kyber registered
Aug 5 21:35:06.301341 kernel: io scheduler bfq registered
Aug 5 21:35:06.301594 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Aug 5 21:35:06.301625 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 5 21:35:06.301645 kernel: ACPI: button: Power Button [PWRB]
Aug 5 21:35:06.301664 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Aug 5 21:35:06.301683 kernel: ACPI: button: Sleep Button [SLPB]
Aug 5 21:35:06.301701 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 21:35:06.301727 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Aug 5 21:35:06.303123 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Aug 5 21:35:06.303188 kernel: printk: console [ttyS0] disabled
Aug 5 21:35:06.305158 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Aug 5 21:35:06.305184 kernel: printk: console [ttyS0] enabled
Aug 5 21:35:06.305204 kernel: printk: bootconsole [uart0] disabled
Aug 5 21:35:06.305225 kernel: thunder_xcv, ver 1.0
Aug 5 21:35:06.305244 kernel: thunder_bgx, ver 1.0
Aug 5 21:35:06.305264 kernel: nicpf, ver 1.0
Aug 5 21:35:06.305302 kernel: nicvf, ver 1.0
Aug 5 21:35:06.305606 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 5 21:35:06.305828 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T21:35:05 UTC (1722893705)
Aug 5 21:35:06.305855 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 5 21:35:06.305875 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Aug 5 21:35:06.305894 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 5 21:35:06.305913 kernel: watchdog: Hard watchdog permanently disabled
Aug 5 21:35:06.305951 kernel: NET: Registered PF_INET6 protocol family
Aug 5 21:35:06.305979 kernel: Segment Routing with IPv6
Aug 5 21:35:06.305998 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 21:35:06.306017 kernel: NET: Registered PF_PACKET protocol family
Aug 5 21:35:06.306035 kernel: Key type dns_resolver registered
Aug 5 21:35:06.306054 kernel: registered taskstats version 1
Aug 5 21:35:06.306072 kernel: Loading compiled-in X.509 certificates
Aug 5 21:35:06.306091 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 7b6de7a842f23ac7c1bb6bedfb9546933daaea09'
Aug 5 21:35:06.306110 kernel: Key type .fscrypt registered
Aug 5 21:35:06.306128 kernel: Key type fscrypt-provisioning registered
Aug 5 21:35:06.306146 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 21:35:06.306170 kernel: ima: Allocated hash algorithm: sha1
Aug 5 21:35:06.306189 kernel: ima: No architecture policies found
Aug 5 21:35:06.306208 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 5 21:35:06.306227 kernel: clk: Disabling unused clocks
Aug 5 21:35:06.306245 kernel: Freeing unused kernel memory: 39040K
Aug 5 21:35:06.306264 kernel: Run /init as init process
Aug 5 21:35:06.306283 kernel: with arguments:
Aug 5 21:35:06.306301 kernel: /init
Aug 5 21:35:06.306319 kernel: with environment:
Aug 5 21:35:06.306343 kernel: HOME=/
Aug 5 21:35:06.306362 kernel: TERM=linux
Aug 5 21:35:06.306380 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 21:35:06.306403 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 21:35:06.306427 systemd[1]: Detected virtualization amazon.
Aug 5 21:35:06.306448 systemd[1]: Detected architecture arm64.
Aug 5 21:35:06.306468 systemd[1]: Running in initrd.
Aug 5 21:35:06.306492 systemd[1]: No hostname configured, using default hostname.
Aug 5 21:35:06.306512 systemd[1]: Hostname set to .
Aug 5 21:35:06.306533 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 21:35:06.306554 systemd[1]: Queued start job for default target initrd.target.
Aug 5 21:35:06.306574 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:35:06.306596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:35:06.306618 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 21:35:06.306639 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 21:35:06.306665 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 21:35:06.306687 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 21:35:06.306712 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 21:35:06.306735 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 21:35:06.306756 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:35:06.306778 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:35:06.306799 systemd[1]: Reached target paths.target - Path Units.
Aug 5 21:35:06.306825 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 21:35:06.306846 systemd[1]: Reached target swap.target - Swaps.
Aug 5 21:35:06.306866 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 21:35:06.306889 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 21:35:06.306911 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 21:35:06.309990 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 21:35:06.310043 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 21:35:06.310065 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:35:06.310086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:35:06.310122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:35:06.310143 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 21:35:06.310164 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 21:35:06.310184 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 21:35:06.310205 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 21:35:06.310225 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 21:35:06.310246 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 21:35:06.310267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 21:35:06.310292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:35:06.310313 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 21:35:06.310333 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
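The dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device style unit names above are systemd's escaped form of the underlying /dev/disk/by-... paths: the leading slash is dropped, remaining slashes become '-', and other unsafe characters (including the literal '-') are C-escaped as \xXX before the .device suffix is appended. Below is a simplified Python sketch of that mapping, following the escaping rules described in systemd.unit(5); it is an approximation that ignores edge cases (empty paths, a leading dot), not the systemd-escape implementation.

    def path_to_device_unit(path: str) -> str:
        """Approximate systemd's path escaping for .device unit names (sketch)."""
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")                  # path separators become '-'
            elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)                   # characters kept verbatim
            else:
                out.append("\\x%02x" % ord(ch))  # everything else is C-escaped
        return "".join(out) + ".device"

    print(path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit name logged above
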
Aug 5 21:35:06.310399 systemd-journald[250]: Collecting audit messages is disabled.
Aug 5 21:35:06.310449 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 21:35:06.310473 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 21:35:06.310493 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 21:35:06.310514 systemd-journald[250]: Journal started
Aug 5 21:35:06.310556 systemd-journald[250]: Runtime Journal (/run/log/journal/ec280715800cbb45a2005ebda5c1c5e3) is 8.0M, max 75.3M, 67.3M free.
Aug 5 21:35:06.280013 systemd-modules-load[251]: Inserted module 'overlay'
Aug 5 21:35:06.319400 kernel: Bridge firewalling registered
Aug 5 21:35:06.319475 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 21:35:06.316663 systemd-modules-load[251]: Inserted module 'br_netfilter'
Aug 5 21:35:06.327477 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 21:35:06.345534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:35:06.350575 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:35:06.357172 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:35:06.373002 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:35:06.391752 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:35:06.398556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:35:06.405239 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 21:35:06.449126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:35:06.457630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:35:06.478662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 21:35:06.485362 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:35:06.505315 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 21:35:06.558008 dracut-cmdline[290]: dracut-dracut-053
Aug 5 21:35:06.564801 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e
Aug 5 21:35:06.599123 systemd-resolved[285]: Positive Trust Anchors:
Aug 5 21:35:06.599179 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 21:35:06.599244 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 21:35:06.745968 kernel: SCSI subsystem initialized
Aug 5 21:35:06.755955 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 21:35:06.767971 kernel: iscsi: registered transport (tcp)
Aug 5 21:35:06.794305 kernel: iscsi: registered transport (qla4xxx)
Aug 5 21:35:06.794435 kernel: QLogic iSCSI HBA Driver
Aug 5 21:35:06.860011 kernel: random: crng init done
Aug 5 21:35:06.860345 systemd-resolved[285]: Defaulting to hostname 'linux'.
Aug 5 21:35:06.864566 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 21:35:06.867493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:35:06.908525 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 21:35:06.921206 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 21:35:06.960984 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 21:35:06.961130 kernel: device-mapper: uevent: version 1.0.3
Aug 5 21:35:06.962993 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 21:35:07.031997 kernel: raid6: neonx8 gen() 6720 MB/s
Aug 5 21:35:07.048965 kernel: raid6: neonx4 gen() 6512 MB/s
Aug 5 21:35:07.065963 kernel: raid6: neonx2 gen() 5427 MB/s
Aug 5 21:35:07.082965 kernel: raid6: neonx1 gen() 3947 MB/s
Aug 5 21:35:07.099967 kernel: raid6: int64x8 gen() 3821 MB/s
Aug 5 21:35:07.116969 kernel: raid6: int64x4 gen() 3711 MB/s
Aug 5 21:35:07.133964 kernel: raid6: int64x2 gen() 3606 MB/s
Aug 5 21:35:07.151709 kernel: raid6: int64x1 gen() 2756 MB/s
Aug 5 21:35:07.151755 kernel: raid6: using algorithm neonx8 gen() 6720 MB/s
Aug 5 21:35:07.169647 kernel: raid6: .... xor() 4878 MB/s, rmw enabled
Aug 5 21:35:07.169691 kernel: raid6: using neon recovery algorithm
Aug 5 21:35:07.177979 kernel: xor: measuring software checksum speed
Aug 5 21:35:07.180670 kernel: 8regs : 10953 MB/sec
Aug 5 21:35:07.180723 kernel: 32regs : 11922 MB/sec
Aug 5 21:35:07.183429 kernel: arm64_neon : 9628 MB/sec
Aug 5 21:35:07.183468 kernel: xor: using function: 32regs (11922 MB/sec)
Aug 5 21:35:07.276997 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 21:35:07.305621 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 21:35:07.316285 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:35:07.369614 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Aug 5 21:35:07.382058 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:35:07.399500 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 21:35:07.449094 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Aug 5 21:35:07.526991 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
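The raid6 gen() and xor benchmarks above show the kernel timing each candidate implementation and keeping whichever is fastest (neonx8 and 32regs on this instance). The sketch below illustrates that benchmark-and-select pattern in Python; the two toy XOR implementations, buffer sizes, and MB/s arithmetic are illustrative stand-ins, not the kernel's code.

    import time

    def xor_loop(a: bytes, b: bytes) -> bytes:
        # Byte-by-byte XOR, the slow reference implementation.
        return bytes(x ^ y for x, y in zip(a, b))

    def xor_bigint(a: bytes, b: bytes) -> bytes:
        # XOR via Python's arbitrary-precision integers.
        n = len(a)
        return (int.from_bytes(a, "little") ^ int.from_bytes(b, "little")).to_bytes(n, "little")

    def pick_fastest(candidates: dict, a: bytes, b: bytes, rounds: int = 20):
        """Time each candidate and return the fastest, kernel-benchmark style."""
        best = None
        for name, fn in candidates.items():
            start = time.perf_counter()
            for _ in range(rounds):
                fn(a, b)
            rate = rounds * len(a) / (time.perf_counter() - start) / 1e6  # rough MB/s
            if best is None or rate > best[1]:
                best = (name, rate)
        return best

    a, b = bytes(1 << 20), bytes(range(256)) * 4096   # two 1 MiB buffers
    name, rate = pick_fastest({"loop": xor_loop, "bigint": xor_bigint}, a, b)
    print(f"xor: using function: {name} ({rate:.0f} MB/s)")
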
Aug 5 21:35:07.539251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 21:35:07.668455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:35:07.681417 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 21:35:07.720659 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 21:35:07.727330 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 21:35:07.746167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:35:07.750798 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 21:35:07.756243 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 21:35:07.808322 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 21:35:07.885409 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 5 21:35:07.885558 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Aug 5 21:35:07.909747 kernel: ena 0000:00:05.0: ENA device version: 0.10
Aug 5 21:35:07.910171 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Aug 5 21:35:07.910415 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:01:fe:4b:56:e1
Aug 5 21:35:07.912900 (udev-worker)[515]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 21:35:07.924498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 21:35:07.924825 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:35:07.945176 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:35:07.958738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 21:35:07.960835 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:35:07.963049 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:35:07.971892 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Aug 5 21:35:07.971948 kernel: nvme nvme0: pci function 0000:00:04.0
Aug 5 21:35:07.979467 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:35:07.987073 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 5 21:35:07.994732 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 21:35:07.994805 kernel: GPT:9289727 != 16777215
Aug 5 21:35:07.994832 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 21:35:07.994859 kernel: GPT:9289727 != 16777215
Aug 5 21:35:07.997311 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 21:35:07.997384 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 21:35:08.019092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:35:08.036394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:35:08.100561 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:35:08.172025 kernel: BTRFS: device fsid 8a9ab799-ab52-4671-9234-72d7c6e57b99 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (541)
Aug 5 21:35:08.180721 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
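The "GPT:Primary header thinks Alt. header is not at the end of the disk. GPT:9289727 != 16777215" warning above is typical of an image built for a smaller disk and then written to a larger EBS volume: the primary GPT header still records the old backup-header location, while the volume's real last sector is further out (the disk-uuid step later in the log rewrites the headers). As a minimal sketch of that check, the following Python reads the primary GPT header at LBA 1 and compares its alternate-LBA field with the device's actual last sector; 512-byte logical sectors are assumed, and this is an illustration rather than the kernel's parser.

    import os
    import struct
    import sys

    SECTOR = 512

    def check_gpt_backup(path: str) -> None:
        """Sketch: compare the GPT alternate-header LBA with the device's last LBA."""
        with open(path, "rb") as f:
            f.seek(1 * SECTOR)                 # primary GPT header lives at LBA 1
            header = f.read(92)
            f.seek(0, os.SEEK_END)
            last_lba = f.tell() // SECTOR - 1  # last addressable sector of the device

        if header[0:8] != b"EFI PART":
            sys.exit(f"{path}: no GPT signature at LBA 1")

        # Offsets per the UEFI spec: current LBA at byte 24, alternate (backup) LBA at byte 32.
        _current_lba, alternate_lba = struct.unpack_from("<QQ", header, 24)
        if alternate_lba != last_lba:
            print(f"GPT: {alternate_lba} != {last_lba} "
                  "(backup header is not at the end of the disk; "
                  "partitioning tools such as GNU Parted can relocate it)")
        else:
            print("GPT: backup header is at the last LBA")

    if __name__ == "__main__":
        check_gpt_backup(sys.argv[1])   # e.g. /dev/nvme0n1 or a raw image file
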
Aug 5 21:35:08.190984 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (515)
Aug 5 21:35:08.284836 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Aug 5 21:35:08.324198 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Aug 5 21:35:08.325005 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Aug 5 21:35:08.340355 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 5 21:35:08.358365 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 21:35:08.379858 disk-uuid[660]: Primary Header is updated.
Aug 5 21:35:08.379858 disk-uuid[660]: Secondary Entries is updated.
Aug 5 21:35:08.379858 disk-uuid[660]: Secondary Header is updated.
Aug 5 21:35:08.388745 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 21:35:08.409019 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 21:35:08.415991 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 21:35:09.425979 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 21:35:09.429126 disk-uuid[662]: The operation has completed successfully.
Aug 5 21:35:09.643577 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 21:35:09.645024 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 21:35:09.684488 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 21:35:09.694530 sh[1005]: Success
Aug 5 21:35:09.728996 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 5 21:35:09.868473 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 21:35:09.875079 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 21:35:09.880051 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 21:35:09.914458 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9ab799-ab52-4671-9234-72d7c6e57b99
Aug 5 21:35:09.914577 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:35:09.914629 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 21:35:09.917167 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 21:35:09.917205 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 21:35:10.033998 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 5 21:35:10.058478 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 21:35:10.061476 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 21:35:10.079371 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 21:35:10.088715 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 21:35:10.113462 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:35:10.113583 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:35:10.113618 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 5 21:35:10.122729 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 5 21:35:10.147685 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 21:35:10.150310 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:35:10.169486 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 21:35:10.184994 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 21:35:10.326202 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 21:35:10.338311 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 21:35:10.404839 systemd-networkd[1197]: lo: Link UP
Aug 5 21:35:10.404865 systemd-networkd[1197]: lo: Gained carrier
Aug 5 21:35:10.409814 systemd-networkd[1197]: Enumeration completed
Aug 5 21:35:10.411837 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:35:10.411845 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 21:35:10.411896 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 21:35:10.421056 systemd-networkd[1197]: eth0: Link UP
Aug 5 21:35:10.421066 systemd-networkd[1197]: eth0: Gained carrier
Aug 5 21:35:10.421095 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:35:10.432324 systemd[1]: Reached target network.target - Network.
Aug 5 21:35:10.442047 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.17.56/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 5 21:35:10.628761 ignition[1106]: Ignition 2.19.0
Aug 5 21:35:10.630804 ignition[1106]: Stage: fetch-offline
Aug 5 21:35:10.631434 ignition[1106]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:35:10.631458 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 21:35:10.633859 ignition[1106]: Ignition finished successfully
Aug 5 21:35:10.642085 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 21:35:10.654354 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 5 21:35:10.689823 ignition[1209]: Ignition 2.19.0
Aug 5 21:35:10.689860 ignition[1209]: Stage: fetch
Aug 5 21:35:10.691193 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:35:10.691232 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 21:35:10.691441 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 21:35:10.700995 ignition[1209]: PUT result: OK
Aug 5 21:35:10.704540 ignition[1209]: parsed url from cmdline: ""
Aug 5 21:35:10.704702 ignition[1209]: no config URL provided
Aug 5 21:35:10.704722 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 21:35:10.704776 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Aug 5 21:35:10.704810 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 21:35:10.712530 ignition[1209]: PUT result: OK
Aug 5 21:35:10.712640 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Aug 5 21:35:10.716773 ignition[1209]: GET result: OK
Aug 5 21:35:10.717086 ignition[1209]: parsing config with SHA512: 67448082fc9d1f088644cca654c07191c6d6fedac86c828f72e11aa1082ce1bd482cd59b8748d4f8ea0b533a09e98e387a4bf5349869e8d6fa3254b46b99a7bf
Aug 5 21:35:10.729972 unknown[1209]: fetched base config from "system"
Aug 5 21:35:10.730251 unknown[1209]: fetched base config from "system"
Aug 5 21:35:10.732053 ignition[1209]: fetch: fetch complete
Aug 5 21:35:10.730275 unknown[1209]: fetched user config from "aws"
Aug 5 21:35:10.732073 ignition[1209]: fetch: fetch passed
Aug 5 21:35:10.732211 ignition[1209]: Ignition finished successfully
Aug 5 21:35:10.746060 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 5 21:35:10.757956 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 21:35:10.802829 ignition[1216]: Ignition 2.19.0
Aug 5 21:35:10.802861 ignition[1216]: Stage: kargs
Aug 5 21:35:10.804614 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:35:10.804714 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 21:35:10.805739 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 21:35:10.811582 ignition[1216]: PUT result: OK
Aug 5 21:35:10.817151 ignition[1216]: kargs: kargs passed
Aug 5 21:35:10.817291 ignition[1216]: Ignition finished successfully
Aug 5 21:35:10.823058 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 21:35:10.835591 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 21:35:10.880765 ignition[1223]: Ignition 2.19.0
Aug 5 21:35:10.880802 ignition[1223]: Stage: disks
Aug 5 21:35:10.882680 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Aug 5 21:35:10.882717 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 21:35:10.882956 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 21:35:10.886331 ignition[1223]: PUT result: OK
Aug 5 21:35:10.894446 ignition[1223]: disks: disks passed
Aug 5 21:35:10.894768 ignition[1223]: Ignition finished successfully
Aug 5 21:35:10.899613 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 21:35:10.903370 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 21:35:10.906231 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 21:35:10.923729 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 21:35:10.929336 systemd[1]: Reached target sysinit.target - System Initialization.
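The Ignition fetch stage above follows the IMDSv2 pattern visible in the log: PUT a session token to 169.254.169.254, GET the user-data with that token, and hash the retrieved config (the log prints its SHA512). A minimal Python sketch of the same round trip using only the standard library is shown below; the endpoint paths and token headers are as documented for EC2 IMDSv2 and as logged here, error handling and retries are omitted, and this is an illustration rather than Ignition's own fetcher.

    import hashlib
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        """PUT /latest/api/token to obtain an IMDSv2 session token."""
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    def fetch_user_data(token: str) -> bytes:
        """GET the instance user-data using the same API version as in the log."""
        req = urllib.request.Request(
            f"{IMDS}/2019-10-01/user-data",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read()

    if __name__ == "__main__":
        token = imds_token()
        config = fetch_user_data(token)
        print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())
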
Aug 5 21:35:10.931185 systemd[1]: Reached target basic.target - Basic System.
Aug 5 21:35:10.953309 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 21:35:11.008712 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 21:35:11.017504 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 21:35:11.035253 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 21:35:11.135024 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ec701988-3dff-4e7d-a2a2-79d78965de5d r/w with ordered data mode. Quota mode: none.
Aug 5 21:35:11.136710 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 21:35:11.140405 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 21:35:11.169234 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 21:35:11.185256 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 21:35:11.192436 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 21:35:11.192579 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 21:35:11.192648 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 21:35:11.210042 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1251)
Aug 5 21:35:11.214681 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:35:11.214845 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 5 21:35:11.214874 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 5 21:35:11.219965 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 5 21:35:11.223831 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 21:35:11.231344 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 21:35:11.241421 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 21:35:11.719194 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 21:35:11.727734 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory
Aug 5 21:35:11.736489 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 21:35:11.757076 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 21:35:12.127544 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 21:35:12.142196 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 21:35:12.147330 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 21:35:12.166800 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 21:35:12.169324 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d
Aug 5 21:35:12.196191 systemd-networkd[1197]: eth0: Gained IPv6LL
Aug 5 21:35:12.217077 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 21:35:12.230549 ignition[1364]: INFO : Ignition 2.19.0 Aug 5 21:35:12.234032 ignition[1364]: INFO : Stage: mount Aug 5 21:35:12.234032 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:35:12.234032 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:35:12.234032 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:35:12.242576 ignition[1364]: INFO : PUT result: OK Aug 5 21:35:12.248062 ignition[1364]: INFO : mount: mount passed Aug 5 21:35:12.250501 ignition[1364]: INFO : Ignition finished successfully Aug 5 21:35:12.255370 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 21:35:12.267126 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 21:35:12.306365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 21:35:12.331959 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1376) Aug 5 21:35:12.335777 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:35:12.335895 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:35:12.337105 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 21:35:12.343028 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 21:35:12.347621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 21:35:12.404958 ignition[1393]: INFO : Ignition 2.19.0 Aug 5 21:35:12.404958 ignition[1393]: INFO : Stage: files Aug 5 21:35:12.408235 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:35:12.408235 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:35:12.412691 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:35:12.415815 ignition[1393]: INFO : PUT result: OK Aug 5 21:35:12.420644 ignition[1393]: DEBUG : files: compiled without relabeling support, skipping Aug 5 21:35:12.434331 ignition[1393]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 21:35:12.434331 ignition[1393]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 21:35:12.455255 ignition[1393]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 21:35:12.458092 ignition[1393]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 21:35:12.460587 ignition[1393]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 21:35:12.460541 unknown[1393]: wrote ssh authorized keys file for user: core Aug 5 21:35:12.466182 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 21:35:12.466182 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 5 21:35:12.560549 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 21:35:12.675176 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 21:35:12.675176 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 21:35:12.675176 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] 
writing file "/sysroot/home/core/install.sh" Aug 5 21:35:12.675176 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:35:12.689251 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Aug 5 21:35:13.212287 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 21:35:14.520204 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:35:14.520204 ignition[1393]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 21:35:14.532357 ignition[1393]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 21:35:14.532357 ignition[1393]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 21:35:14.532357 ignition[1393]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 21:35:14.532357 ignition[1393]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 21:35:14.532357 ignition[1393]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 21:35:14.532357 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 21:35:14.532357 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 21:35:14.532357 ignition[1393]: INFO : files: 
files passed Aug 5 21:35:14.532357 ignition[1393]: INFO : Ignition finished successfully Aug 5 21:35:14.537016 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 21:35:14.573347 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 21:35:14.578422 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 21:35:14.587486 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 21:35:14.587690 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 21:35:14.625997 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:35:14.625997 initrd-setup-root-after-ignition[1422]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:35:14.633274 initrd-setup-root-after-ignition[1426]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:35:14.640598 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:35:14.644810 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 21:35:14.663019 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 21:35:14.746387 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 21:35:14.747205 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 21:35:14.755422 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 21:35:14.758056 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 21:35:14.764291 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 21:35:14.781790 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 21:35:14.831190 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:35:14.843254 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 21:35:14.888162 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 21:35:14.890086 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 21:35:14.895413 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:35:14.898314 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:35:14.900566 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 21:35:14.903894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 21:35:14.904083 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:35:14.906728 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 21:35:14.908585 systemd[1]: Stopped target basic.target - Basic System. Aug 5 21:35:14.910305 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 21:35:14.912408 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 21:35:14.914732 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 21:35:14.918803 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 21:35:14.921648 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 21:35:14.942359 systemd[1]: Stopped target sysinit.target - System Initialization. 
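Both the Ignition stages above and the coreos-metadata fetches later in this log authenticate to the EC2 instance metadata service before reading anything, which is what the repeated "PUT http://169.254.169.254/latest/api/token: attempt #1 / PUT result: OK" lines record. A minimal Python sketch of that IMDSv2 exchange (it only does something useful when run on an EC2 instance; the metadata path shown is one of those queried later in this log):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds=21600):
        # IMDSv2: a PUT with a TTL header returns a short-lived session token.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    def imds_get(path, token):
        # Every later read presents the token in a request header.
        req = urllib.request.Request(
            f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    token = imds_token()
    print(imds_get("/2021-01-03/meta-data/instance-id", token))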
Aug 5 21:35:14.944173 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 21:35:14.945949 systemd[1]: Stopped target swap.target - Swaps. Aug 5 21:35:14.947409 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 21:35:14.947534 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 21:35:14.949683 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:35:14.956056 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:35:14.956582 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 21:35:14.966751 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:35:14.968843 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 21:35:14.969082 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 21:35:14.976728 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 21:35:14.976832 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:35:14.978999 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 21:35:14.979083 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 21:35:14.992224 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 21:35:14.993597 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 21:35:14.993818 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 21:35:14.997171 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 21:35:14.997292 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 21:35:14.997397 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:35:14.997956 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 21:35:14.998038 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 21:35:15.053964 ignition[1447]: INFO : Ignition 2.19.0 Aug 5 21:35:15.053964 ignition[1447]: INFO : Stage: umount Aug 5 21:35:15.060289 ignition[1447]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:35:15.060289 ignition[1447]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:35:15.060289 ignition[1447]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:35:15.067949 ignition[1447]: INFO : PUT result: OK Aug 5 21:35:15.079795 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 21:35:15.084545 ignition[1447]: INFO : umount: umount passed Aug 5 21:35:15.084545 ignition[1447]: INFO : Ignition finished successfully Aug 5 21:35:15.088799 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 21:35:15.089137 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 21:35:15.097745 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 21:35:15.097847 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 21:35:15.100258 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 21:35:15.100372 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 21:35:15.116444 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 21:35:15.116569 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 21:35:15.122367 systemd[1]: Stopped target network.target - Network. 
Aug 5 21:35:15.125570 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 21:35:15.125742 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 21:35:15.129896 systemd[1]: Stopped target paths.target - Path Units. Aug 5 21:35:15.133548 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 21:35:15.135081 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:35:15.139168 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 21:35:15.140907 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 21:35:15.142835 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 21:35:15.142999 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 21:35:15.147555 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 21:35:15.147676 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 21:35:15.151112 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 21:35:15.151244 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 21:35:15.153364 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 21:35:15.153543 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 21:35:15.157411 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 21:35:15.166423 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 21:35:15.170429 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 21:35:15.170680 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 21:35:15.175432 systemd-networkd[1197]: eth0: DHCPv6 lease lost Aug 5 21:35:15.177883 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 21:35:15.178087 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 21:35:15.186800 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 21:35:15.187567 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 21:35:15.192586 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 21:35:15.193174 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 21:35:15.211579 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 21:35:15.211704 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 21:35:15.230313 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 21:35:15.234501 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 21:35:15.234702 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 21:35:15.248142 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 21:35:15.248318 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:35:15.251129 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 21:35:15.251229 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 21:35:15.253306 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 21:35:15.253423 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:35:15.260119 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:35:15.313095 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Aug 5 21:35:15.313447 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:35:15.320646 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 21:35:15.320747 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 21:35:15.323671 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 21:35:15.323745 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:35:15.325534 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 21:35:15.325627 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 21:35:15.326448 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 21:35:15.326547 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 21:35:15.327723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:35:15.327805 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:35:15.336056 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 21:35:15.348665 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 21:35:15.348864 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:35:15.361600 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 21:35:15.361739 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:35:15.370884 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 21:35:15.371726 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 21:35:15.396210 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 21:35:15.396674 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 21:35:15.403339 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 21:35:15.413478 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 21:35:15.449771 systemd[1]: Switching root. Aug 5 21:35:15.509052 systemd-journald[250]: Journal stopped Aug 5 21:35:19.202179 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Aug 5 21:35:19.202368 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 21:35:19.202430 kernel: SELinux: policy capability open_perms=1 Aug 5 21:35:19.202478 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 21:35:19.202515 kernel: SELinux: policy capability always_check_network=0 Aug 5 21:35:19.202553 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 21:35:19.202588 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 21:35:19.202626 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 21:35:19.202657 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 21:35:19.202691 kernel: audit: type=1403 audit(1722893717.103:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 21:35:19.205633 systemd[1]: Successfully loaded SELinux policy in 84.063ms. Aug 5 21:35:19.205759 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.182ms. 
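The kernel lines above list the policy capabilities that came in with the loaded SELinux policy. They can be read back from selinuxfs after boot; a small sketch, assuming selinuxfs is mounted at its usual /sys/fs/selinux location:

    from pathlib import Path

    caps = Path("/sys/fs/selinux/policy_capabilities")
    if caps.is_dir():
        # One file per capability, each containing 0 or 1, matching the
        # "SELinux: policy capability ..." lines printed at policy load.
        for cap in sorted(caps.iterdir()):
            print(cap.name, "=", cap.read_text().strip())
    else:
        print("selinuxfs not mounted or SELinux disabled")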
Aug 5 21:35:19.205809 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 21:35:19.205846 systemd[1]: Detected virtualization amazon. Aug 5 21:35:19.205879 systemd[1]: Detected architecture arm64. Aug 5 21:35:19.205912 systemd[1]: Detected first boot. Aug 5 21:35:19.205997 systemd[1]: Initializing machine ID from VM UUID. Aug 5 21:35:19.206033 zram_generator::config[1488]: No configuration found. Aug 5 21:35:19.206070 systemd[1]: Populated /etc with preset unit settings. Aug 5 21:35:19.206106 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 5 21:35:19.206145 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 5 21:35:19.206181 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 5 21:35:19.206219 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 21:35:19.206253 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 21:35:19.206286 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 21:35:19.206327 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 5 21:35:19.206360 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 21:35:19.206393 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 5 21:35:19.206427 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 21:35:19.206467 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 21:35:19.206501 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:35:19.206537 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:35:19.206569 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 21:35:19.206605 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 5 21:35:19.206637 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 21:35:19.206676 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 21:35:19.206706 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 5 21:35:19.206744 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:35:19.206780 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 5 21:35:19.206815 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 5 21:35:19.206852 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 5 21:35:19.206889 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 21:35:19.206922 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:35:19.206998 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 21:35:19.207034 systemd[1]: Reached target slices.target - Slice Units. Aug 5 21:35:19.207075 systemd[1]: Reached target swap.target - Swaps. 
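systemd logs the detected virtualization, architecture and first-boot state right after loading. The same facts can be queried from userspace; a sketch using the systemd-detect-virt tool plus the machine-id convention (systemd treats a missing or empty /etc/machine-id as a first boot):

    import platform
    import subprocess
    from pathlib import Path

    virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
    print("virtualization:", virt.stdout.strip() or "none")   # "amazon" on this host
    print("architecture:", platform.machine())                # "aarch64" on this host

    machine_id = Path("/etc/machine-id")
    first_boot = not machine_id.exists() or not machine_id.read_text().strip()
    print("first boot:", first_boot)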
Aug 5 21:35:19.207109 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 21:35:19.207143 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 21:35:19.207176 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 21:35:19.207206 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 21:35:19.207240 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:35:19.207271 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 5 21:35:19.207305 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 21:35:19.207338 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 21:35:19.207370 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 21:35:19.207406 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 5 21:35:19.207438 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 5 21:35:19.207468 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 5 21:35:19.207506 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 21:35:19.207540 systemd[1]: Reached target machines.target - Containers. Aug 5 21:35:19.207570 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 21:35:19.207604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:35:19.207642 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 21:35:19.207679 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 21:35:19.207713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:35:19.207744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 21:35:19.207776 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:35:19.207806 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 5 21:35:19.207840 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:35:19.207872 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 21:35:19.207902 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 5 21:35:19.208047 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 5 21:35:19.208087 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 5 21:35:19.208119 systemd[1]: Stopped systemd-fsck-usr.service. Aug 5 21:35:19.208149 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 21:35:19.208180 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 21:35:19.208211 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 5 21:35:19.208245 kernel: fuse: init (API version 7.39) Aug 5 21:35:19.208278 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 21:35:19.208314 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Aug 5 21:35:19.213078 systemd[1]: verity-setup.service: Deactivated successfully. Aug 5 21:35:19.213128 systemd[1]: Stopped verity-setup.service. Aug 5 21:35:19.213163 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 21:35:19.213195 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 21:35:19.213226 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 21:35:19.213259 kernel: ACPI: bus type drm_connector registered Aug 5 21:35:19.213292 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 21:35:19.213324 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 21:35:19.213355 kernel: loop: module loaded Aug 5 21:35:19.213400 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 21:35:19.213434 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 21:35:19.213466 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 21:35:19.213499 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 5 21:35:19.213530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:35:19.213568 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:35:19.213604 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 21:35:19.213639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 21:35:19.213698 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:35:19.213741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:35:19.213774 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 21:35:19.213817 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 21:35:19.213849 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:35:19.213880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:35:19.213922 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 21:35:19.214125 systemd-journald[1565]: Collecting audit messages is disabled. Aug 5 21:35:19.214205 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 21:35:19.214238 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 21:35:19.214278 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 5 21:35:19.214313 systemd-journald[1565]: Journal started Aug 5 21:35:19.214365 systemd-journald[1565]: Runtime Journal (/run/log/journal/ec280715800cbb45a2005ebda5c1c5e3) is 8.0M, max 75.3M, 67.3M free. Aug 5 21:35:18.551100 systemd[1]: Queued start job for default target multi-user.target. Aug 5 21:35:18.616677 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 5 21:35:18.617537 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 5 21:35:19.239431 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 21:35:19.258212 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 5 21:35:19.258456 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 5 21:35:19.264665 systemd[1]: Reached target local-fs.target - Local File Systems. 
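The systemd-journald lines above show the runtime journal being sized under /run/log/journal. The same stream can be read back programmatically; a sketch using journalctl's JSON output, which emits one JSON object per entry:

    import json
    import subprocess

    out = subprocess.run(
        ["journalctl", "-o", "json", "-n", "5", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        entry = json.loads(line)
        print(entry.get("SYSLOG_IDENTIFIER", "?"), entry.get("MESSAGE", ""))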
Aug 5 21:35:19.273772 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 5 21:35:19.291233 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 21:35:19.304062 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 21:35:19.304190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:35:19.320173 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 21:35:19.327030 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 21:35:19.343955 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 5 21:35:19.344095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 21:35:19.370833 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 21:35:19.385586 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 21:35:19.393094 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 21:35:19.406391 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 21:35:19.410731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 21:35:19.415060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 21:35:19.502146 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 21:35:19.515563 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 5 21:35:19.522082 kernel: loop0: detected capacity change from 0 to 193208 Aug 5 21:35:19.522253 kernel: block loop0: the capability attribute has been deprecated. Aug 5 21:35:19.525742 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 21:35:19.533404 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 5 21:35:19.537995 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 21:35:19.563277 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 5 21:35:19.568065 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:35:19.592011 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 21:35:19.631889 systemd-journald[1565]: Time spent on flushing to /var/log/journal/ec280715800cbb45a2005ebda5c1c5e3 is 106.983ms for 913 entries. Aug 5 21:35:19.631889 systemd-journald[1565]: System Journal (/var/log/journal/ec280715800cbb45a2005ebda5c1c5e3) is 8.0M, max 195.6M, 187.6M free. Aug 5 21:35:19.754211 systemd-journald[1565]: Received client request to flush runtime journal. Aug 5 21:35:19.754296 kernel: loop1: detected capacity change from 0 to 59688 Aug 5 21:35:19.678729 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:35:19.693368 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 21:35:19.773401 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 5 21:35:19.777233 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Aug 5 21:35:19.787649 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 21:35:19.790390 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 5 21:35:19.795004 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 5 21:35:19.809742 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 21:35:19.832033 kernel: loop2: detected capacity change from 0 to 51896 Aug 5 21:35:19.885922 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Aug 5 21:35:19.886007 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Aug 5 21:35:19.903485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:35:19.961973 kernel: loop3: detected capacity change from 0 to 113712 Aug 5 21:35:20.061109 kernel: loop4: detected capacity change from 0 to 193208 Aug 5 21:35:20.088029 kernel: loop5: detected capacity change from 0 to 59688 Aug 5 21:35:20.103410 kernel: loop6: detected capacity change from 0 to 51896 Aug 5 21:35:20.121073 kernel: loop7: detected capacity change from 0 to 113712 Aug 5 21:35:20.138920 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Aug 5 21:35:20.142114 (sd-merge)[1641]: Merged extensions into '/usr'. Aug 5 21:35:20.154509 systemd[1]: Reloading requested from client PID 1595 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 21:35:20.154547 systemd[1]: Reloading... Aug 5 21:35:20.434302 zram_generator::config[1665]: No configuration found. Aug 5 21:35:20.781451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:35:20.916534 systemd[1]: Reloading finished in 760 ms. Aug 5 21:35:20.963841 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 21:35:20.979455 systemd[1]: Starting ensure-sysext.service... Aug 5 21:35:20.993575 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 21:35:21.022443 systemd[1]: Reloading requested from client PID 1716 ('systemctl') (unit ensure-sysext.service)... Aug 5 21:35:21.022503 systemd[1]: Reloading... Aug 5 21:35:21.090295 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 21:35:21.092011 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 21:35:21.094509 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 21:35:21.095352 systemd-tmpfiles[1717]: ACLs are not supported, ignoring. Aug 5 21:35:21.095663 systemd-tmpfiles[1717]: ACLs are not supported, ignoring. Aug 5 21:35:21.106347 systemd-tmpfiles[1717]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 21:35:21.107035 systemd-tmpfiles[1717]: Skipping /boot Aug 5 21:35:21.163432 systemd-tmpfiles[1717]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 21:35:21.165197 systemd-tmpfiles[1717]: Skipping /boot Aug 5 21:35:21.209031 ldconfig[1587]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 5 21:35:21.225281 zram_generator::config[1740]: No configuration found. 
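The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-ami extension images onto /usr. /etc/extensions is one of the directories it scans, and the kubernetes.raw link there was written by Ignition earlier in this log; a sketch that lists what is staged:

    from pathlib import Path

    ext_dir = Path("/etc/extensions")
    if ext_dir.is_dir():
        for entry in sorted(ext_dir.iterdir()):
            # Ignition wrote kubernetes.raw as a symlink into /opt/extensions.
            target = entry.resolve() if entry.is_symlink() else entry
            print(f"{entry.name} -> {target}")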
Aug 5 21:35:21.523731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:35:21.640058 systemd[1]: Reloading finished in 616 ms. Aug 5 21:35:21.675907 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 5 21:35:21.679371 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 5 21:35:21.690087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:35:21.719614 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 21:35:21.727530 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 5 21:35:21.740474 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 21:35:21.751362 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 21:35:21.769407 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:35:21.778693 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 21:35:21.793610 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:35:21.807870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:35:21.833393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:35:21.859904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:35:21.863072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:35:21.872529 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 21:35:21.884407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:35:21.884771 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:35:21.888254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:35:21.888606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:35:21.898348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:35:21.911330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:35:21.927218 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 21:35:21.930500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:35:21.931187 systemd[1]: Reached target time-set.target - System Time Set. Aug 5 21:35:21.939626 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 21:35:21.943538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:35:21.946796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:35:21.976136 systemd[1]: Finished ensure-sysext.service. 
Aug 5 21:35:21.981687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 21:35:21.983322 systemd-udevd[1804]: Using default interface naming scheme 'v255'. Aug 5 21:35:21.997698 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 21:35:22.001509 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 21:35:22.017032 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:35:22.017594 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:35:22.051053 augenrules[1833]: No rules Aug 5 21:35:22.052307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:35:22.052719 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:35:22.056029 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 21:35:22.062492 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 21:35:22.085349 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 21:35:22.086078 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 21:35:22.089315 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 21:35:22.106340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:35:22.125529 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 21:35:22.169400 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 21:35:22.185086 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 21:35:22.197599 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 21:35:22.328406 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 5 21:35:22.352738 (udev-worker)[1863]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:35:22.378976 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1856) Aug 5 21:35:22.433489 systemd-networkd[1845]: lo: Link UP Aug 5 21:35:22.433516 systemd-networkd[1845]: lo: Gained carrier Aug 5 21:35:22.438336 systemd-networkd[1845]: Enumeration completed Aug 5 21:35:22.438852 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 21:35:22.446087 systemd-networkd[1845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:35:22.446108 systemd-networkd[1845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 21:35:22.449054 systemd-networkd[1845]: eth0: Link UP Aug 5 21:35:22.449364 systemd-networkd[1845]: eth0: Gained carrier Aug 5 21:35:22.449409 systemd-networkd[1845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:35:22.464113 systemd-networkd[1845]: eth0: DHCPv4 address 172.31.17.56/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 5 21:35:22.474433 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
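systemd-networkd has just brought eth0 up and taken the DHCPv4 lease 172.31.17.56/20 from 172.31.16.1. A sketch that reads the same state back through iproute2's JSON output (the interface name comes from the log; net.ifnames=0 is on the kernel command line):

    import json
    import subprocess

    out = subprocess.run(["ip", "-j", "addr", "show", "eth0"],
                         capture_output=True, text=True, check=True).stdout
    for iface in json.loads(out):
        for addr in iface.get("addr_info", []):
            if addr.get("family") == "inet":
                print(f"{iface['ifname']}: {addr['local']}/{addr['prefixlen']}")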
Aug 5 21:35:22.504192 systemd-resolved[1803]: Positive Trust Anchors: Aug 5 21:35:22.504220 systemd-resolved[1803]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 21:35:22.504280 systemd-resolved[1803]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 21:35:22.515628 systemd-resolved[1803]: Defaulting to hostname 'linux'. Aug 5 21:35:22.519659 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 21:35:22.521842 systemd[1]: Reached target network.target - Network. Aug 5 21:35:22.523429 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:35:22.581677 systemd-networkd[1845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:35:22.604023 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1849) Aug 5 21:35:22.901716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:35:22.916621 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 5 21:35:22.925470 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 5 21:35:22.940857 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 21:35:22.958032 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 21:35:23.002518 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 21:35:23.005476 lvm[1969]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 21:35:23.044923 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 21:35:23.048151 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:35:23.059359 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 21:35:23.084011 lvm[1975]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 21:35:23.086322 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:35:23.089309 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 21:35:23.091915 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 21:35:23.094421 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 21:35:23.111258 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 21:35:23.113827 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 21:35:23.116317 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
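systemd-resolved has loaded its DNSSEC trust anchors and fallen back to the hostname 'linux' until the metadata-derived name arrives later in this log. The upstream servers it actually uses are published in /run/systemd/resolve/resolv.conf (local clients normally talk to the 127.0.0.53 stub instead); a sketch:

    from pathlib import Path

    conf = Path("/run/systemd/resolve/resolv.conf")
    if conf.exists():
        for line in conf.read_text().splitlines():
            if line.startswith(("nameserver", "search")):
                print(line)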
Aug 5 21:35:23.118530 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 21:35:23.118590 systemd[1]: Reached target paths.target - Path Units. Aug 5 21:35:23.123198 systemd[1]: Reached target timers.target - Timer Units. Aug 5 21:35:23.127201 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 21:35:23.132694 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 21:35:23.149744 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 21:35:23.153557 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 21:35:23.156503 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 21:35:23.160236 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 21:35:23.162340 systemd[1]: Reached target basic.target - Basic System. Aug 5 21:35:23.165082 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 21:35:23.165355 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 21:35:23.174125 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 21:35:23.186670 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 5 21:35:23.207343 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 21:35:23.225386 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 21:35:23.233368 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 21:35:23.235348 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 21:35:23.241473 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 21:35:23.251328 systemd[1]: Started ntpd.service - Network Time Service. Aug 5 21:35:23.267227 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 21:35:23.278144 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 5 21:35:23.292412 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 21:35:23.302428 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 21:35:23.317250 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 5 21:35:23.320766 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 21:35:23.321812 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 5 21:35:23.327428 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 21:35:23.337209 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 21:35:23.384001 jq[1984]: false Aug 5 21:35:23.397804 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 21:35:23.400645 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
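prepare-helm.service, written by Ignition in the files stage above, is described as "Unpack helm to /opt/bin". The unit's contents are not shown in this log, but the equivalent step would look roughly like the following sketch, with the archive path taken from the earlier Ignition entries (staged as /sysroot/opt/... in the initramfs, i.e. /opt/... in the running system):

    import tarfile
    from pathlib import Path

    archive = Path("/opt/helm-v3.13.2-linux-arm64.tar.gz")
    dest = Path("/opt/bin")
    dest.mkdir(parents=True, exist_ok=True)

    with tarfile.open(archive) as tar:
        # tar later logs "linux-arm64/helm", the single member of interest.
        member = tar.getmember("linux-arm64/helm")
        member.name = "helm"              # drop the leading directory on extract
        tar.extract(member, path=dest)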
Aug 5 21:35:23.415262 (ntainerd)[2007]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 21:35:23.432219 dbus-daemon[1983]: [system] SELinux support is enabled Aug 5 21:35:23.437117 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 21:35:23.443433 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:53:08 UTC 2024 (1): Starting Aug 5 21:35:23.443433 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 5 21:35:23.437355 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:53:08 UTC 2024 (1): Starting Aug 5 21:35:23.440639 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 21:35:23.447684 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: ---------------------------------------------------- Aug 5 21:35:23.447684 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Aug 5 21:35:23.447684 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 5 21:35:23.447684 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: corporation. Support and training for ntp-4 are Aug 5 21:35:23.447684 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: available at https://www.nwtime.org/support Aug 5 21:35:23.447684 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: ---------------------------------------------------- Aug 5 21:35:23.437456 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 5 21:35:23.450142 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 21:35:23.437481 ntpd[1987]: ---------------------------------------------------- Aug 5 21:35:23.444056 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Aug 5 21:35:23.444092 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 5 21:35:23.444114 ntpd[1987]: corporation. Support and training for ntp-4 are Aug 5 21:35:23.444135 ntpd[1987]: available at https://www.nwtime.org/support Aug 5 21:35:23.444154 ntpd[1987]: ---------------------------------------------------- Aug 5 21:35:23.454154 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 5 21:35:23.454362 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1845 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 5 21:35:23.459292 ntpd[1987]: proto: precision = 0.096 usec (-23) Aug 5 21:35:23.462234 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: proto: precision = 0.096 usec (-23) Aug 5 21:35:23.462234 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: basedate set to 2024-07-24 Aug 5 21:35:23.462234 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: gps base set to 2024-07-28 (week 2325) Aug 5 21:35:23.460921 ntpd[1987]: basedate set to 2024-07-24 Aug 5 21:35:23.460997 ntpd[1987]: gps base set to 2024-07-28 (week 2325) Aug 5 21:35:23.467738 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Aug 5 21:35:23.470484 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Aug 5 21:35:23.470707 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 5 21:35:23.470827 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 5 21:35:23.471289 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Aug 5 21:35:23.471435 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Aug 5 21:35:23.471557 ntpd[1987]: Listen normally on 3 eth0 172.31.17.56:123 Aug 5 21:35:23.471693 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: Listen normally on 3 eth0 172.31.17.56:123 Aug 5 21:35:23.471821 ntpd[1987]: Listen normally on 4 lo [::1]:123 Aug 5 21:35:23.471965 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: Listen normally on 4 lo [::1]:123 Aug 5 21:35:23.474070 ntpd[1987]: bind(21) AF_INET6 fe80::401:feff:fe4b:56e1%2#123 flags 0x11 failed: Cannot assign requested address Aug 5 21:35:23.475386 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: bind(21) AF_INET6 fe80::401:feff:fe4b:56e1%2#123 flags 0x11 failed: Cannot assign requested address Aug 5 21:35:23.475386 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: unable to create socket on eth0 (5) for fe80::401:feff:fe4b:56e1%2#123 Aug 5 21:35:23.475386 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: failed to init interface for address fe80::401:feff:fe4b:56e1%2 Aug 5 21:35:23.475386 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Aug 5 21:35:23.474130 ntpd[1987]: unable to create socket on eth0 (5) for fe80::401:feff:fe4b:56e1%2#123 Aug 5 21:35:23.474160 ntpd[1987]: failed to init interface for address fe80::401:feff:fe4b:56e1%2 Aug 5 21:35:23.474253 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Aug 5 21:35:23.482474 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 21:35:23.482668 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 21:35:23.482771 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 21:35:23.482891 ntpd[1987]: 5 Aug 21:35:23 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 21:35:23.482980 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 21:35:23.485053 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 21:35:23.500652 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
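ntpd's bind(21) failure above is timing rather than misconfiguration: it tried to open a socket on eth0's IPv6 link-local address before that address was usable (the networkd "Gained IPv6LL" entry only appears a moment later in this log). A sketch of the same failure mode, using the address from the log; binding port 123 requires root, and the call fails with "Cannot assign requested address" while the address is absent or still tentative:

    import socket

    addr = "fe80::401:feff:fe4b:56e1%eth0"   # link-local address from the log
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        sock.bind((addr, 123))
        print("bind ok")
    except OSError as exc:
        print("bind failed:", exc)
    finally:
        sock.close()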
Aug 5 21:35:23.500728 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 21:35:23.506017 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 21:35:23.506076 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 5 21:35:23.508824 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 5 21:35:23.527517 jq[1998]: true Aug 5 21:35:23.548256 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 5 21:35:23.637700 tar[2001]: linux-arm64/helm Aug 5 21:35:23.694743 coreos-metadata[1982]: Aug 05 21:35:23.692 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 5 21:35:23.704378 jq[2023]: true Aug 5 21:35:23.708044 coreos-metadata[1982]: Aug 05 21:35:23.707 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 5 21:35:23.712809 coreos-metadata[1982]: Aug 05 21:35:23.710 INFO Fetch successful Aug 5 21:35:23.712809 coreos-metadata[1982]: Aug 05 21:35:23.710 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 5 21:35:23.721137 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 5 21:35:23.725203 coreos-metadata[1982]: Aug 05 21:35:23.721 INFO Fetch successful Aug 5 21:35:23.725203 coreos-metadata[1982]: Aug 05 21:35:23.722 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 5 21:35:23.730083 extend-filesystems[1985]: Found loop4 Aug 5 21:35:23.730083 extend-filesystems[1985]: Found loop5 Aug 5 21:35:23.730083 extend-filesystems[1985]: Found loop6 Aug 5 21:35:23.730083 extend-filesystems[1985]: Found loop7 Aug 5 21:35:23.730083 extend-filesystems[1985]: Found nvme0n1 Aug 5 21:35:23.730083 extend-filesystems[1985]: Found nvme0n1p1 Aug 5 21:35:23.730083 extend-filesystems[1985]: Found nvme0n1p2 Aug 5 21:35:23.730083 extend-filesystems[1985]: Found nvme0n1p3 Aug 5 21:35:23.730083 extend-filesystems[1985]: Found usr Aug 5 21:35:23.766835 extend-filesystems[1985]: Found nvme0n1p4 Aug 5 21:35:23.766835 extend-filesystems[1985]: Found nvme0n1p6 Aug 5 21:35:23.766835 extend-filesystems[1985]: Found nvme0n1p7 Aug 5 21:35:23.766835 extend-filesystems[1985]: Found nvme0n1p9 Aug 5 21:35:23.766835 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.736 INFO Fetch successful Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.736 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.746 INFO Fetch successful Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.746 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.748 INFO Fetch failed with 404: resource not found Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.751 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.761 INFO Fetch successful Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.764 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.773 INFO Fetch successful Aug 5 
21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.773 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.778 INFO Fetch successful Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.780 INFO Fetch successful Aug 5 21:35:23.780260 coreos-metadata[1982]: Aug 05 21:35:23.780 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 5 21:35:23.788501 coreos-metadata[1982]: Aug 05 21:35:23.786 INFO Fetch successful Aug 5 21:35:23.797218 update_engine[1995]: I0805 21:35:23.794502 1995 main.cc:92] Flatcar Update Engine starting Aug 5 21:35:23.817584 update_engine[1995]: I0805 21:35:23.812533 1995 update_check_scheduler.cc:74] Next update check in 8m30s Aug 5 21:35:23.810065 systemd[1]: Started update-engine.service - Update Engine. Aug 5 21:35:23.833290 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 21:35:23.870148 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Aug 5 21:35:23.889011 extend-filesystems[2046]: resize2fs 1.47.0 (5-Feb-2023) Aug 5 21:35:23.908195 systemd-networkd[1845]: eth0: Gained IPv6LL Aug 5 21:35:23.917461 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 5 21:35:23.931211 systemd-logind[1994]: Watching system buttons on /dev/input/event0 (Power Button) Aug 5 21:35:23.931283 systemd-logind[1994]: Watching system buttons on /dev/input/event1 (Sleep Button) Aug 5 21:35:23.936222 systemd-logind[1994]: New seat seat0. Aug 5 21:35:23.939105 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 21:35:23.961088 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 21:35:24.026621 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 5 21:35:24.089515 extend-filesystems[2046]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 5 21:35:24.089515 extend-filesystems[2046]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 5 21:35:24.089515 extend-filesystems[2046]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 5 21:35:24.104994 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Aug 5 21:35:24.098575 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 5 21:35:24.118965 bash[2067]: Updated "/home/core/.ssh/authorized_keys" Aug 5 21:35:24.134316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:35:24.158527 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 21:35:24.162802 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 21:35:24.167273 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 21:35:24.167920 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 21:35:24.175094 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 21:35:24.238081 systemd[1]: Starting sshkeys.service... Aug 5 21:35:24.390338 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1850) Aug 5 21:35:24.492074 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 21:35:24.496223 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
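The extend-filesystems entries show the root ext4 filesystem on nvme0n1p9 being grown online from 553472 to 1489915 4k blocks with resize2fs. A sketch of the same operation (device name from the log; resize2fs grows a mounted ext4 filesystem to fill its partition and must run as root):

    import subprocess

    device = "/dev/nvme0n1p9"          # root partition, as named in the log
    subprocess.run(["resize2fs", device], check=True)
    # Show the result; "/" should now report the enlarged size.
    subprocess.run(["df", "-h", "/"], check=True)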
Aug 5 21:35:24.507782 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 21:35:24.520270 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 5 21:35:24.529076 amazon-ssm-agent[2060]: Initializing new seelog logger Aug 5 21:35:24.529076 amazon-ssm-agent[2060]: New Seelog Logger Creation Complete Aug 5 21:35:24.529076 amazon-ssm-agent[2060]: 2024/08/05 21:35:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:35:24.529076 amazon-ssm-agent[2060]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:35:24.529076 amazon-ssm-agent[2060]: 2024/08/05 21:35:24 processing appconfig overrides Aug 5 21:35:24.529567 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 5 21:35:24.546311 amazon-ssm-agent[2060]: 2024/08/05 21:35:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:35:24.546311 amazon-ssm-agent[2060]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:35:24.546523 amazon-ssm-agent[2060]: 2024/08/05 21:35:24 processing appconfig overrides Aug 5 21:35:24.546853 amazon-ssm-agent[2060]: 2024/08/05 21:35:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:35:24.546853 amazon-ssm-agent[2060]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:35:24.546980 amazon-ssm-agent[2060]: 2024/08/05 21:35:24 processing appconfig overrides Aug 5 21:35:24.561985 amazon-ssm-agent[2060]: 2024-08-05 21:35:24 INFO Proxy environment variables: Aug 5 21:35:24.578814 amazon-ssm-agent[2060]: 2024/08/05 21:35:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:35:24.578814 amazon-ssm-agent[2060]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:35:24.578814 amazon-ssm-agent[2060]: 2024/08/05 21:35:24 processing appconfig overrides Aug 5 21:35:24.585697 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 5 21:35:24.585990 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 5 21:35:24.599437 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2021 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 5 21:35:24.631484 systemd[1]: Starting polkit.service - Authorization Manager... Aug 5 21:35:24.663527 amazon-ssm-agent[2060]: 2024-08-05 21:35:24 INFO https_proxy: Aug 5 21:35:24.673995 containerd[2007]: time="2024-08-05T21:35:24.672501015Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 5 21:35:24.730719 polkitd[2125]: Started polkitd version 121 Aug 5 21:35:24.762634 polkitd[2125]: Loading rules from directory /etc/polkit-1/rules.d Aug 5 21:35:24.762849 polkitd[2125]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 5 21:35:24.766471 polkitd[2125]: Finished loading, compiling and executing 2 rules Aug 5 21:35:24.779974 amazon-ssm-agent[2060]: 2024-08-05 21:35:24 INFO http_proxy: Aug 5 21:35:24.775190 systemd[1]: Started polkit.service - Authorization Manager. 
Aug 5 21:35:24.774853 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 5 21:35:24.776606 polkitd[2125]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 5 21:35:24.820217 locksmithd[2036]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 21:35:24.858917 systemd-resolved[1803]: System hostname changed to 'ip-172-31-17-56'. Aug 5 21:35:24.858966 systemd-hostnamed[2021]: Hostname set to (transient) Aug 5 21:35:24.878565 amazon-ssm-agent[2060]: 2024-08-05 21:35:24 INFO no_proxy: Aug 5 21:35:24.914684 containerd[2007]: time="2024-08-05T21:35:24.912546556Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 21:35:24.919435 containerd[2007]: time="2024-08-05T21:35:24.919056652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:35:24.929772 containerd[2007]: time="2024-08-05T21:35:24.929138788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:35:24.936485 containerd[2007]: time="2024-08-05T21:35:24.936382348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:35:24.937683 containerd[2007]: time="2024-08-05T21:35:24.937595728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:35:24.941378 containerd[2007]: time="2024-08-05T21:35:24.940832584Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 21:35:24.950176 containerd[2007]: time="2024-08-05T21:35:24.950004424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 21:35:24.965976 containerd[2007]: time="2024-08-05T21:35:24.956945392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:35:24.965976 containerd[2007]: time="2024-08-05T21:35:24.960271828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 21:35:24.965976 containerd[2007]: time="2024-08-05T21:35:24.960644248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:35:24.975307 amazon-ssm-agent[2060]: 2024-08-05 21:35:24 INFO Checking if agent identity type OnPrem can be assumed Aug 5 21:35:24.984900 containerd[2007]: time="2024-08-05T21:35:24.984603172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 21:35:24.984900 containerd[2007]: time="2024-08-05T21:35:24.984745084Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 21:35:24.984900 containerd[2007]: time="2024-08-05T21:35:24.984776992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Aug 5 21:35:24.998327 containerd[2007]: time="2024-08-05T21:35:24.988715248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:35:24.998327 containerd[2007]: time="2024-08-05T21:35:24.991042600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 5 21:35:24.998327 containerd[2007]: time="2024-08-05T21:35:24.995388160Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 21:35:24.998327 containerd[2007]: time="2024-08-05T21:35:24.995491240Z" level=info msg="metadata content store policy set" policy=shared Aug 5 21:35:25.010777 containerd[2007]: time="2024-08-05T21:35:25.010409160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 21:35:25.010777 containerd[2007]: time="2024-08-05T21:35:25.010510548Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 21:35:25.010777 containerd[2007]: time="2024-08-05T21:35:25.010545540Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 5 21:35:25.010777 containerd[2007]: time="2024-08-05T21:35:25.010634916Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.010670952Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011160324Z" level=info msg="NRI interface is disabled by configuration." Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011190924Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011450532Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011487192Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011518032Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011548572Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011582064Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011619888Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011650464Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011681880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011712312Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011750592Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 5 21:35:25.014986 containerd[2007]: time="2024-08-05T21:35:25.011780184Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 21:35:25.015752 containerd[2007]: time="2024-08-05T21:35:25.011809968Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 21:35:25.015752 containerd[2007]: time="2024-08-05T21:35:25.012126432Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 21:35:25.015752 containerd[2007]: time="2024-08-05T21:35:25.012757440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 21:35:25.015752 containerd[2007]: time="2024-08-05T21:35:25.012825804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.015752 containerd[2007]: time="2024-08-05T21:35:25.012872268Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 21:35:25.022373 containerd[2007]: time="2024-08-05T21:35:25.022237716Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.024860268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025040808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025074120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025108188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025140036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025178688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025214964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025248324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025286160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025725864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025765956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025800516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025833012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025863468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.033973 containerd[2007]: time="2024-08-05T21:35:25.025896012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.034724 containerd[2007]: time="2024-08-05T21:35:25.025924860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.034724 containerd[2007]: time="2024-08-05T21:35:25.025974792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 5 21:35:25.034830 containerd[2007]: time="2024-08-05T21:35:25.026438052Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s 
DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 21:35:25.034830 containerd[2007]: time="2024-08-05T21:35:25.026550948Z" level=info msg="Connect containerd service" Aug 5 21:35:25.034830 containerd[2007]: time="2024-08-05T21:35:25.026610600Z" level=info msg="using legacy CRI server" Aug 5 21:35:25.034830 containerd[2007]: time="2024-08-05T21:35:25.026629356Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 21:35:25.034830 containerd[2007]: time="2024-08-05T21:35:25.026808000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 21:35:25.047764 containerd[2007]: time="2024-08-05T21:35:25.043761445Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:35:25.049094 containerd[2007]: time="2024-08-05T21:35:25.048509533Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 21:35:25.049094 containerd[2007]: time="2024-08-05T21:35:25.048560257Z" level=info msg="Start subscribing containerd event" Aug 5 21:35:25.049094 containerd[2007]: time="2024-08-05T21:35:25.048805309Z" level=info msg="Start recovering state" Aug 5 21:35:25.056395 containerd[2007]: time="2024-08-05T21:35:25.052064953Z" level=info msg="Start event monitor" Aug 5 21:35:25.056395 containerd[2007]: time="2024-08-05T21:35:25.052135117Z" level=info msg="Start snapshots syncer" Aug 5 21:35:25.056395 containerd[2007]: time="2024-08-05T21:35:25.052162477Z" level=info msg="Start cni network conf syncer for default" Aug 5 21:35:25.056395 containerd[2007]: time="2024-08-05T21:35:25.052187017Z" level=info msg="Start streaming server" Aug 5 21:35:25.056395 containerd[2007]: time="2024-08-05T21:35:25.055028101Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 21:35:25.056395 containerd[2007]: time="2024-08-05T21:35:25.055092685Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 21:35:25.056395 containerd[2007]: time="2024-08-05T21:35:25.055127317Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 21:35:25.059421 containerd[2007]: time="2024-08-05T21:35:25.059224093Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 21:35:25.059421 containerd[2007]: time="2024-08-05T21:35:25.059339545Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 21:35:25.062574 containerd[2007]: time="2024-08-05T21:35:25.059689117Z" level=info msg="containerd successfully booted in 0.418022s" Aug 5 21:35:25.060185 systemd[1]: Started containerd.service - containerd container runtime. 
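containerd starts successfully here, but its CRI plugin immediately warns that no CNI network config exists yet ("no network config found in /etc/cni/net.d"), which is expected this early in boot: the CNI conflist is normally dropped in later by a network add-on. The sketch below is a rough pre-flight check along the same lines; the directory comes from the logged config, but the accepted file extensions are an approximation rather than containerd's exact loader behaviour.

```python
# Rough check mirroring the CRI plugin warning above: is there any CNI network
# config in /etc/cni/net.d yet?  Directory from the logged containerd config;
# the accepted extensions here are an approximation.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")

def cni_configs(conf_dir: Path = CNI_CONF_DIR):
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir()
                  if p.suffix in (".conf", ".conflist", ".json"))

if __name__ == "__main__":
    found = cni_configs()
    if found:
        print("CNI config present:", ", ".join(p.name for p in found))
    else:
        print(f"no network config found in {CNI_CONF_DIR} - pod networking not ready yet")
```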
Aug 5 21:35:25.091256 amazon-ssm-agent[2060]: 2024-08-05 21:35:24 INFO Checking if agent identity type EC2 can be assumed Aug 5 21:35:25.110440 coreos-metadata[2100]: Aug 05 21:35:25.109 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 5 21:35:25.118027 coreos-metadata[2100]: Aug 05 21:35:25.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 5 21:35:25.121690 coreos-metadata[2100]: Aug 05 21:35:25.120 INFO Fetch successful Aug 5 21:35:25.121690 coreos-metadata[2100]: Aug 05 21:35:25.120 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 5 21:35:25.128011 coreos-metadata[2100]: Aug 05 21:35:25.124 INFO Fetch successful Aug 5 21:35:25.149230 unknown[2100]: wrote ssh authorized keys file for user: core Aug 5 21:35:25.241041 amazon-ssm-agent[2060]: 2024-08-05 21:35:25 INFO Agent will take identity from EC2 Aug 5 21:35:25.292090 update-ssh-keys[2191]: Updated "/home/core/.ssh/authorized_keys" Aug 5 21:35:25.297456 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 5 21:35:25.319798 systemd[1]: Finished sshkeys.service. Aug 5 21:35:25.353226 amazon-ssm-agent[2060]: 2024-08-05 21:35:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 21:35:25.453952 amazon-ssm-agent[2060]: 2024-08-05 21:35:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 21:35:25.551777 amazon-ssm-agent[2060]: 2024-08-05 21:35:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 21:35:25.650942 amazon-ssm-agent[2060]: 2024-08-05 21:35:25 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 5 21:35:25.753375 amazon-ssm-agent[2060]: 2024-08-05 21:35:25 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 5 21:35:25.852363 amazon-ssm-agent[2060]: 2024-08-05 21:35:25 INFO [amazon-ssm-agent] Starting Core Agent Aug 5 21:35:25.951958 amazon-ssm-agent[2060]: 2024-08-05 21:35:25 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 5 21:35:26.056535 amazon-ssm-agent[2060]: 2024-08-05 21:35:25 INFO [Registrar] Starting registrar module Aug 5 21:35:26.078355 tar[2001]: linux-arm64/LICENSE Aug 5 21:35:26.078355 tar[2001]: linux-arm64/README.md Aug 5 21:35:26.084096 amazon-ssm-agent[2060]: 2024-08-05 21:35:25 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 5 21:35:26.084096 amazon-ssm-agent[2060]: 2024-08-05 21:35:26 INFO [EC2Identity] EC2 registration was successful. Aug 5 21:35:26.084096 amazon-ssm-agent[2060]: 2024-08-05 21:35:26 INFO [CredentialRefresher] credentialRefresher has started Aug 5 21:35:26.084096 amazon-ssm-agent[2060]: 2024-08-05 21:35:26 INFO [CredentialRefresher] Starting credentials refresher loop Aug 5 21:35:26.084096 amazon-ssm-agent[2060]: 2024-08-05 21:35:26 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 5 21:35:26.123597 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 21:35:26.161065 amazon-ssm-agent[2060]: 2024-08-05 21:35:26 INFO [CredentialRefresher] Next credential rotation will be in 30.9999674632 minutes Aug 5 21:35:26.263187 sshd_keygen[2029]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 21:35:26.314047 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 21:35:26.330599 systemd[1]: Starting issuegen.service - Generate /run/issue... 
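The sshkeys path above fetches meta-data/public-keys/0/openssh-key from IMDS and then rewrites /home/core/.ssh/authorized_keys. The sketch below covers only the file-update half, assuming the key text has already been fetched; the permission values (0700 directory, 0600 file) follow OpenSSH's usual expectations rather than anything stated in the log, and ownership handling is omitted.

```python
# Illustrative sketch of updating an authorized_keys file with a key fetched
# elsewhere (e.g. from instance metadata).  Permissions follow OpenSSH's usual
# expectations (0700 dir, 0600 file); the target path mirrors the log above.
import os
from pathlib import Path

def update_authorized_keys(home: str, public_key: str) -> Path:
    ssh_dir = Path(home) / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth = ssh_dir / "authorized_keys"
    existing = auth.read_text().splitlines() if auth.exists() else []
    key = public_key.strip()
    if key not in existing:
        existing.append(key)
    auth.write_text("\n".join(existing) + "\n")
    os.chmod(auth, 0o600)
    return auth

if __name__ == "__main__":
    # Hypothetical key material; in the logged flow it comes from
    # meta-data/public-keys/0/openssh-key.
    print(update_authorized_keys("/home/core", "ssh-rsa AAAA... core@example"))
```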
Aug 5 21:35:26.337248 systemd[1]: Started sshd@0-172.31.17.56:22-139.178.68.195:34704.service - OpenSSH per-connection server daemon (139.178.68.195:34704). Aug 5 21:35:26.380740 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 21:35:26.381241 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 21:35:26.396582 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 21:35:26.445508 ntpd[1987]: Listen normally on 6 eth0 [fe80::401:feff:fe4b:56e1%2]:123 Aug 5 21:35:26.447681 ntpd[1987]: 5 Aug 21:35:26 ntpd[1987]: Listen normally on 6 eth0 [fe80::401:feff:fe4b:56e1%2]:123 Aug 5 21:35:26.455079 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 21:35:26.475389 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 21:35:26.492362 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 5 21:35:26.495162 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 21:35:26.597584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:35:26.603633 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 21:35:26.606811 systemd[1]: Startup finished in 1.352s (kernel) + 11.250s (initrd) + 9.584s (userspace) = 22.187s. Aug 5 21:35:26.621640 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:35:26.630509 sshd[2219]: Accepted publickey for core from 139.178.68.195 port 34704 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:35:26.635112 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:35:26.664793 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 21:35:26.674743 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 21:35:26.685224 systemd-logind[1994]: New session 1 of user core. Aug 5 21:35:26.726202 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 21:35:26.742742 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 21:35:26.773149 (systemd)[2240]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:35:27.026982 systemd[2240]: Queued start job for default target default.target. Aug 5 21:35:27.037365 systemd[2240]: Created slice app.slice - User Application Slice. Aug 5 21:35:27.037739 systemd[2240]: Reached target paths.target - Paths. Aug 5 21:35:27.037823 systemd[2240]: Reached target timers.target - Timers. Aug 5 21:35:27.050255 systemd[2240]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 21:35:27.084680 systemd[2240]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 21:35:27.085187 systemd[2240]: Reached target sockets.target - Sockets. Aug 5 21:35:27.085227 systemd[2240]: Reached target basic.target - Basic System. Aug 5 21:35:27.085340 systemd[2240]: Reached target default.target - Main User Target. Aug 5 21:35:27.085408 systemd[2240]: Startup finished in 295ms. Aug 5 21:35:27.087479 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 21:35:27.102503 systemd[1]: Started session-1.scope - Session 1 of User core. 
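The "Accepted publickey ... RSA SHA256:n8e1/..." lines identify the client key by its OpenSSH-style fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding stripped. A small sketch that reproduces that fingerprint format from an authorized_keys-style line; the sample key built in __main__ is dummy material, only there to make the script runnable.

```python
# Compute an OpenSSH-style "SHA256:..." fingerprint from a public key line,
# matching the format sshd logs above ("Accepted publickey ... SHA256:...").
import base64
import hashlib
import struct

def ssh_fingerprint(pubkey_line: str) -> str:
    # authorized_keys format: "<type> <base64-blob> [comment]"
    blob_b64 = pubkey_line.split()[1]
    blob = base64.b64decode(blob_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    # Build a syntactically valid dummy ed25519 blob just to show the call shape.
    blob = struct.pack(">I", 11) + b"ssh-ed25519" + struct.pack(">I", 32) + bytes(32)
    sample = "ssh-ed25519 " + base64.b64encode(blob).decode() + " dummy@example"
    print(ssh_fingerprint(sample))
```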
Aug 5 21:35:27.138227 amazon-ssm-agent[2060]: 2024-08-05 21:35:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 5 21:35:27.238650 amazon-ssm-agent[2060]: 2024-08-05 21:35:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2254) started Aug 5 21:35:27.288527 systemd[1]: Started sshd@1-172.31.17.56:22-139.178.68.195:34708.service - OpenSSH per-connection server daemon (139.178.68.195:34708). Aug 5 21:35:27.344966 amazon-ssm-agent[2060]: 2024-08-05 21:35:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 5 21:35:27.504696 sshd[2263]: Accepted publickey for core from 139.178.68.195 port 34708 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:35:27.512479 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:35:27.535352 systemd-logind[1994]: New session 2 of user core. Aug 5 21:35:27.540277 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 21:35:27.683068 kubelet[2233]: E0805 21:35:27.681690 2233 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:35:27.682263 sshd[2263]: pam_unix(sshd:session): session closed for user core Aug 5 21:35:27.691991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:35:27.692673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:35:27.693680 systemd[1]: kubelet.service: Consumed 1.477s CPU time. Aug 5 21:35:27.695186 systemd[1]: sshd@1-172.31.17.56:22-139.178.68.195:34708.service: Deactivated successfully. Aug 5 21:35:27.699475 systemd[1]: session-2.scope: Deactivated successfully. Aug 5 21:35:27.704270 systemd-logind[1994]: Session 2 logged out. Waiting for processes to exit. Aug 5 21:35:27.723643 systemd[1]: Started sshd@2-172.31.17.56:22-139.178.68.195:34716.service - OpenSSH per-connection server daemon (139.178.68.195:34716). Aug 5 21:35:27.726169 systemd-logind[1994]: Removed session 2. Aug 5 21:35:27.918337 sshd[2274]: Accepted publickey for core from 139.178.68.195 port 34716 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:35:27.921360 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:35:27.930325 systemd-logind[1994]: New session 3 of user core. Aug 5 21:35:27.948202 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 21:35:28.068280 sshd[2274]: pam_unix(sshd:session): session closed for user core Aug 5 21:35:28.076129 systemd[1]: sshd@2-172.31.17.56:22-139.178.68.195:34716.service: Deactivated successfully. Aug 5 21:35:28.080409 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 21:35:28.081703 systemd-logind[1994]: Session 3 logged out. Waiting for processes to exit. Aug 5 21:35:28.083744 systemd-logind[1994]: Removed session 3. Aug 5 21:35:28.112494 systemd[1]: Started sshd@3-172.31.17.56:22-139.178.68.195:34724.service - OpenSSH per-connection server daemon (139.178.68.195:34724). 
Aug 5 21:35:28.284248 sshd[2281]: Accepted publickey for core from 139.178.68.195 port 34724 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:35:28.287702 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:35:28.297218 systemd-logind[1994]: New session 4 of user core. Aug 5 21:35:28.306363 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 21:35:28.448496 sshd[2281]: pam_unix(sshd:session): session closed for user core Aug 5 21:35:28.456862 systemd[1]: sshd@3-172.31.17.56:22-139.178.68.195:34724.service: Deactivated successfully. Aug 5 21:35:28.461428 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 21:35:28.462822 systemd-logind[1994]: Session 4 logged out. Waiting for processes to exit. Aug 5 21:35:28.465401 systemd-logind[1994]: Removed session 4. Aug 5 21:35:28.499459 systemd[1]: Started sshd@4-172.31.17.56:22-139.178.68.195:34734.service - OpenSSH per-connection server daemon (139.178.68.195:34734). Aug 5 21:35:28.669613 sshd[2288]: Accepted publickey for core from 139.178.68.195 port 34734 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:35:28.673833 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:35:28.686290 systemd-logind[1994]: New session 5 of user core. Aug 5 21:35:28.694187 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 21:35:28.815303 sudo[2291]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 21:35:28.815887 sudo[2291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:35:28.834032 sudo[2291]: pam_unix(sudo:session): session closed for user root Aug 5 21:35:28.858412 sshd[2288]: pam_unix(sshd:session): session closed for user core Aug 5 21:35:28.864693 systemd[1]: sshd@4-172.31.17.56:22-139.178.68.195:34734.service: Deactivated successfully. Aug 5 21:35:28.868767 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 21:35:28.872985 systemd-logind[1994]: Session 5 logged out. Waiting for processes to exit. Aug 5 21:35:28.875669 systemd-logind[1994]: Removed session 5. Aug 5 21:35:28.907689 systemd[1]: Started sshd@5-172.31.17.56:22-139.178.68.195:34750.service - OpenSSH per-connection server daemon (139.178.68.195:34750). Aug 5 21:35:29.081595 sshd[2296]: Accepted publickey for core from 139.178.68.195 port 34750 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:35:29.085671 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:35:29.096251 systemd-logind[1994]: New session 6 of user core. Aug 5 21:35:29.108260 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 21:35:29.222342 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 21:35:29.222993 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:35:29.230654 sudo[2300]: pam_unix(sudo:session): session closed for user root Aug 5 21:35:29.241581 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 21:35:29.243017 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:35:29.272296 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 21:35:29.277483 auditctl[2303]: No rules Aug 5 21:35:29.278334 systemd[1]: audit-rules.service: Deactivated successfully. 
Aug 5 21:35:29.278793 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 21:35:29.289010 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 21:35:29.359487 augenrules[2321]: No rules Aug 5 21:35:29.364573 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 21:35:29.372671 sudo[2299]: pam_unix(sudo:session): session closed for user root Aug 5 21:35:29.398664 sshd[2296]: pam_unix(sshd:session): session closed for user core Aug 5 21:35:29.405653 systemd[1]: sshd@5-172.31.17.56:22-139.178.68.195:34750.service: Deactivated successfully. Aug 5 21:35:29.410838 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 21:35:29.415703 systemd-logind[1994]: Session 6 logged out. Waiting for processes to exit. Aug 5 21:35:29.418690 systemd-logind[1994]: Removed session 6. Aug 5 21:35:29.443850 systemd[1]: Started sshd@6-172.31.17.56:22-139.178.68.195:34760.service - OpenSSH per-connection server daemon (139.178.68.195:34760). Aug 5 21:35:29.640922 sshd[2329]: Accepted publickey for core from 139.178.68.195 port 34760 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:35:29.643394 sshd[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:35:29.652874 systemd-logind[1994]: New session 7 of user core. Aug 5 21:35:29.661278 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 21:35:29.770839 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 21:35:29.771567 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:35:29.995686 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 21:35:30.007522 (dockerd)[2342]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 21:35:30.258516 systemd-resolved[1803]: Clock change detected. Flushing caches. Aug 5 21:35:30.328161 dockerd[2342]: time="2024-08-05T21:35:30.328026380Z" level=info msg="Starting up" Aug 5 21:35:30.366240 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport986916086-merged.mount: Deactivated successfully. Aug 5 21:35:30.870474 dockerd[2342]: time="2024-08-05T21:35:30.870015539Z" level=info msg="Loading containers: start." Aug 5 21:35:31.135928 kernel: Initializing XFRM netlink socket Aug 5 21:35:31.188729 (udev-worker)[2356]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:35:31.300812 systemd-networkd[1845]: docker0: Link UP Aug 5 21:35:31.330990 dockerd[2342]: time="2024-08-05T21:35:31.330913101Z" level=info msg="Loading containers: done." Aug 5 21:35:31.468269 dockerd[2342]: time="2024-08-05T21:35:31.467835478Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 21:35:31.468553 dockerd[2342]: time="2024-08-05T21:35:31.468507154Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 21:35:31.468806 dockerd[2342]: time="2024-08-05T21:35:31.468754894Z" level=info msg="Daemon has completed initialization" Aug 5 21:35:31.541688 dockerd[2342]: time="2024-08-05T21:35:31.540307138Z" level=info msg="API listen on /run/docker.sock" Aug 5 21:35:31.542740 systemd[1]: Started docker.service - Docker Application Container Engine. 
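Once dockerd reports "API listen on /run/docker.sock", the daemon can be probed over that Unix socket with plain HTTP; /_ping is the Engine API's liveness endpoint. Below is a stdlib-only sketch that speaks raw HTTP/1.0 over the socket; it assumes permission to open /run/docker.sock (root or the docker group) and is a convenience illustration, not part of the logged boot flow.

```python
# Probe the Docker daemon over its Unix socket once it logs
# "API listen on /run/docker.sock".  Uses raw HTTP/1.0 over the socket so no
# third-party client is needed; requires permission to open the socket.
import socket

DOCKER_SOCK = "/run/docker.sock"

def docker_ping(path: str = DOCKER_SOCK) -> str:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        s.connect(path)
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    # Expect an HTTP 200 response with body "OK" when the daemon is healthy.
    print(docker_ping())
```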
Aug 5 21:35:33.238648 containerd[2007]: time="2024-08-05T21:35:33.238442746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 5 21:35:34.095749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252094761.mount: Deactivated successfully. Aug 5 21:35:37.755174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 21:35:37.767212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:35:38.108144 containerd[2007]: time="2024-08-05T21:35:38.106432238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:38.112556 containerd[2007]: time="2024-08-05T21:35:38.112483167Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=31601516" Aug 5 21:35:38.115641 containerd[2007]: time="2024-08-05T21:35:38.115580775Z" level=info msg="ImageCreate event name:\"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:38.159592 containerd[2007]: time="2024-08-05T21:35:38.159521811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:38.163834 containerd[2007]: time="2024-08-05T21:35:38.163764291Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"31598316\" in 4.925239549s" Aug 5 21:35:38.164350 containerd[2007]: time="2024-08-05T21:35:38.164103039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\"" Aug 5 21:35:38.228927 containerd[2007]: time="2024-08-05T21:35:38.228791667Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 5 21:35:38.385009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:35:38.402996 (kubelet)[2545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:35:38.501814 kubelet[2545]: E0805 21:35:38.501674 2545 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:35:38.510903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:35:38.511288 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
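The pull-completion lines include both the image size and the wall-clock duration, so a rough pull throughput falls straight out of them: the kube-apiserver image above is reported as 31,598,316 bytes in 4.925239549s, roughly 6.4 MB/s. The one-liner below just redoes that arithmetic with the two figures copied from the log.

```python
# Back-of-envelope pull throughput from the values containerd logs above for
# the kube-apiserver image: size in bytes and total pull duration in seconds.
size_bytes = 31_598_316          # size "31598316" in the log
duration_s = 4.925239549         # "in 4.925239549s" in the log

throughput_mb_s = size_bytes / duration_s / 1_000_000
print(f"~{throughput_mb_s:.1f} MB/s")   # roughly 6.4 MB/s
```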
Aug 5 21:35:41.917232 containerd[2007]: time="2024-08-05T21:35:41.917060169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:41.921414 containerd[2007]: time="2024-08-05T21:35:41.921099405Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=29018270" Aug 5 21:35:41.927286 containerd[2007]: time="2024-08-05T21:35:41.926978433Z" level=info msg="ImageCreate event name:\"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:41.938955 containerd[2007]: time="2024-08-05T21:35:41.938859118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:41.943351 containerd[2007]: time="2024-08-05T21:35:41.942940894Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"30505537\" in 3.714037135s" Aug 5 21:35:41.943351 containerd[2007]: time="2024-08-05T21:35:41.943012294Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\"" Aug 5 21:35:42.001882 containerd[2007]: time="2024-08-05T21:35:42.001099866Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 5 21:35:43.945868 containerd[2007]: time="2024-08-05T21:35:43.945760079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:43.949133 containerd[2007]: time="2024-08-05T21:35:43.949021404Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=15534520" Aug 5 21:35:43.951104 containerd[2007]: time="2024-08-05T21:35:43.951032148Z" level=info msg="ImageCreate event name:\"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:43.956434 containerd[2007]: time="2024-08-05T21:35:43.956191272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:43.959073 containerd[2007]: time="2024-08-05T21:35:43.958829016Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"17021805\" in 1.957601638s" Aug 5 21:35:43.959073 containerd[2007]: time="2024-08-05T21:35:43.958903092Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\"" Aug 5 21:35:44.008186 
containerd[2007]: time="2024-08-05T21:35:44.008098700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\"" Aug 5 21:35:46.111840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1026217592.mount: Deactivated successfully. Aug 5 21:35:46.981563 containerd[2007]: time="2024-08-05T21:35:46.981479787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:46.984554 containerd[2007]: time="2024-08-05T21:35:46.984478047Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=24977919" Aug 5 21:35:46.986952 containerd[2007]: time="2024-08-05T21:35:46.986873283Z" level=info msg="ImageCreate event name:\"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:46.991188 containerd[2007]: time="2024-08-05T21:35:46.991092339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:46.992752 containerd[2007]: time="2024-08-05T21:35:46.992468115Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"24976938\" in 2.984286591s" Aug 5 21:35:46.992752 containerd[2007]: time="2024-08-05T21:35:46.992536455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\"" Aug 5 21:35:47.038579 containerd[2007]: time="2024-08-05T21:35:47.038360459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 5 21:35:47.594853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912268180.mount: Deactivated successfully. 
Aug 5 21:35:47.611017 containerd[2007]: time="2024-08-05T21:35:47.610275638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:47.612996 containerd[2007]: time="2024-08-05T21:35:47.612771458Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Aug 5 21:35:47.614006 containerd[2007]: time="2024-08-05T21:35:47.613911374Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:47.623474 containerd[2007]: time="2024-08-05T21:35:47.623227886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:47.626482 containerd[2007]: time="2024-08-05T21:35:47.625876694Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 587.372523ms" Aug 5 21:35:47.626482 containerd[2007]: time="2024-08-05T21:35:47.625949474Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Aug 5 21:35:47.681696 containerd[2007]: time="2024-08-05T21:35:47.681592034Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 5 21:35:48.399411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558833030.mount: Deactivated successfully. Aug 5 21:35:48.762163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 21:35:48.774116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:35:49.707769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:35:49.724545 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:35:49.886938 kubelet[2626]: E0805 21:35:49.886848 2626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:35:49.891751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:35:49.892079 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
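This is now the third kubelet start that exits on the same error: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts until something (typically kubeadm init/join) writes that file. The trivial check below mirrors that condition; the path is taken from the log, the script itself is purely illustrative.

```python
# Illustrative pre-flight check for the error the kubelet keeps hitting above:
# it refuses to start until /var/lib/kubelet/config.yaml exists (normally
# written by kubeadm init/join), so systemd keeps restarting it.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_ready(path: Path = KUBELET_CONFIG) -> bool:
    return path.is_file()

if __name__ == "__main__":
    if kubelet_config_ready():
        print(f"{KUBELET_CONFIG} present, kubelet can load its config")
    else:
        print(f"{KUBELET_CONFIG} missing, kubelet will exit and systemd will restart it")
```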
Aug 5 21:35:53.146110 containerd[2007]: time="2024-08-05T21:35:53.145472021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:53.148856 containerd[2007]: time="2024-08-05T21:35:53.148701893Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Aug 5 21:35:53.149965 containerd[2007]: time="2024-08-05T21:35:53.149802917Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:53.160114 containerd[2007]: time="2024-08-05T21:35:53.159938753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:53.164068 containerd[2007]: time="2024-08-05T21:35:53.163905665Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 5.482206663s" Aug 5 21:35:53.164068 containerd[2007]: time="2024-08-05T21:35:53.164030321Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Aug 5 21:35:53.220699 containerd[2007]: time="2024-08-05T21:35:53.220619514Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Aug 5 21:35:53.874240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095711969.mount: Deactivated successfully. Aug 5 21:35:54.685392 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Aug 5 21:35:54.833021 containerd[2007]: time="2024-08-05T21:35:54.832838998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:54.837440 containerd[2007]: time="2024-08-05T21:35:54.837313942Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Aug 5 21:35:54.840090 containerd[2007]: time="2024-08-05T21:35:54.839947786Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:54.846810 containerd[2007]: time="2024-08-05T21:35:54.845718226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:35:54.848169 containerd[2007]: time="2024-08-05T21:35:54.848070730Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.6269368s" Aug 5 21:35:54.848169 containerd[2007]: time="2024-08-05T21:35:54.848154970Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Aug 5 21:36:00.111441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 5 21:36:00.123059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:01.466715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:01.481274 (kubelet)[2736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:36:01.490729 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:01.492540 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 21:36:01.492963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:01.505128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:01.571291 systemd[1]: Reloading requested from client PID 2746 ('systemctl') (unit session-7.scope)... Aug 5 21:36:01.571321 systemd[1]: Reloading... Aug 5 21:36:01.797495 zram_generator::config[2788]: No configuration found. Aug 5 21:36:02.075147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:36:02.272449 systemd[1]: Reloading finished in 700 ms. Aug 5 21:36:02.392915 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 21:36:02.393141 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 21:36:02.394555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:02.408241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:02.938925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 21:36:02.954139 (kubelet)[2844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 21:36:03.066415 kubelet[2844]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:36:03.066415 kubelet[2844]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 21:36:03.066415 kubelet[2844]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:36:03.066415 kubelet[2844]: I0805 21:36:03.065237 2844 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 21:36:04.403103 kubelet[2844]: I0805 21:36:04.403023 2844 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 21:36:04.404020 kubelet[2844]: I0805 21:36:04.403511 2844 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 21:36:04.404828 kubelet[2844]: I0805 21:36:04.404604 2844 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 21:36:04.440194 kubelet[2844]: I0805 21:36:04.439908 2844 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:36:04.441090 kubelet[2844]: E0805 21:36:04.441021 2844 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:04.461757 kubelet[2844]: W0805 21:36:04.461634 2844 machine.go:65] Cannot read vendor id correctly, set empty. Aug 5 21:36:04.463304 kubelet[2844]: I0805 21:36:04.463242 2844 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 21:36:04.464106 kubelet[2844]: I0805 21:36:04.464005 2844 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 21:36:04.464734 kubelet[2844]: I0805 21:36:04.464621 2844 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 21:36:04.465077 kubelet[2844]: I0805 21:36:04.464767 2844 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 21:36:04.465077 kubelet[2844]: I0805 21:36:04.464796 2844 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 21:36:04.465228 kubelet[2844]: I0805 21:36:04.465127 2844 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:36:04.468692 kubelet[2844]: I0805 21:36:04.468618 2844 kubelet.go:393] "Attempting to sync node with API server" Aug 5 21:36:04.468692 kubelet[2844]: I0805 21:36:04.468689 2844 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 21:36:04.468880 kubelet[2844]: I0805 21:36:04.468771 2844 kubelet.go:309] "Adding apiserver pod source" Aug 5 21:36:04.468880 kubelet[2844]: I0805 21:36:04.468830 2844 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 21:36:04.472743 kubelet[2844]: W0805 21:36:04.472598 2844 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.17.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:04.472743 kubelet[2844]: E0805 21:36:04.472757 2844 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:04.475000 kubelet[2844]: I0805 21:36:04.474928 2844 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 21:36:04.479772 kubelet[2844]: W0805 21:36:04.479654 2844 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not 
exist. Recreating. Aug 5 21:36:04.484476 kubelet[2844]: W0805 21:36:04.484294 2844 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.17.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-56&limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:04.484816 kubelet[2844]: E0805 21:36:04.484793 2844 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-56&limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:04.486283 kubelet[2844]: I0805 21:36:04.486217 2844 server.go:1232] "Started kubelet" Aug 5 21:36:04.490984 kubelet[2844]: I0805 21:36:04.490858 2844 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 21:36:04.492588 kubelet[2844]: I0805 21:36:04.492524 2844 server.go:462] "Adding debug handlers to kubelet server" Aug 5 21:36:04.494434 kubelet[2844]: I0805 21:36:04.493285 2844 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 21:36:04.494434 kubelet[2844]: I0805 21:36:04.493921 2844 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 21:36:04.496349 kubelet[2844]: E0805 21:36:04.496312 2844 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 21:36:04.496565 kubelet[2844]: E0805 21:36:04.496544 2844 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 21:36:04.496780 kubelet[2844]: I0805 21:36:04.496709 2844 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 21:36:04.499079 kubelet[2844]: E0805 21:36:04.498931 2844 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-17-56.17e8f2c021eeef92", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-17-56", UID:"ip-172-31-17-56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-17-56"}, FirstTimestamp:time.Date(2024, time.August, 5, 21, 36, 4, 486164370, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 21, 36, 4, 486164370, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-17-56"}': 'Post "https://172.31.17.56:6443/api/v1/namespaces/default/events": dial tcp 172.31.17.56:6443: connect: connection refused'(may retry after sleeping) Aug 5 21:36:04.510032 kubelet[2844]: E0805 21:36:04.509958 2844 kubelet_node_status.go:458] "Error getting 
the current node from lister" err="node \"ip-172-31-17-56\" not found" Aug 5 21:36:04.510695 kubelet[2844]: I0805 21:36:04.510659 2844 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 21:36:04.511068 kubelet[2844]: I0805 21:36:04.511033 2844 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 21:36:04.511521 kubelet[2844]: I0805 21:36:04.511462 2844 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:36:04.513463 kubelet[2844]: W0805 21:36:04.513328 2844 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.17.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:04.513753 kubelet[2844]: E0805 21:36:04.513714 2844 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:04.513884 kubelet[2844]: E0805 21:36:04.513765 2844 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-56?timeout=10s\": dial tcp 172.31.17.56:6443: connect: connection refused" interval="200ms" Aug 5 21:36:04.548978 kubelet[2844]: I0805 21:36:04.548635 2844 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 21:36:04.554518 kubelet[2844]: I0805 21:36:04.554263 2844 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 21:36:04.554518 kubelet[2844]: I0805 21:36:04.554309 2844 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 21:36:04.554518 kubelet[2844]: I0805 21:36:04.554341 2844 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 21:36:04.555092 kubelet[2844]: E0805 21:36:04.554823 2844 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 21:36:04.557615 kubelet[2844]: W0805 21:36:04.557550 2844 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.17.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:04.559832 kubelet[2844]: E0805 21:36:04.559504 2844 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:04.595439 kubelet[2844]: I0805 21:36:04.595358 2844 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 21:36:04.596143 kubelet[2844]: I0805 21:36:04.595633 2844 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 21:36:04.596143 kubelet[2844]: I0805 21:36:04.595670 2844 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:36:04.602837 kubelet[2844]: I0805 21:36:04.602542 2844 policy_none.go:49] "None policy: Start" Aug 5 21:36:04.604512 kubelet[2844]: I0805 21:36:04.604420 2844 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 21:36:04.604512 kubelet[2844]: I0805 21:36:04.604517 2844 state_mem.go:35] "Initializing 
new in-memory state store" Aug 5 21:36:04.615443 kubelet[2844]: I0805 21:36:04.615170 2844 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-56" Aug 5 21:36:04.616479 kubelet[2844]: E0805 21:36:04.616449 2844 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.17.56:6443/api/v1/nodes\": dial tcp 172.31.17.56:6443: connect: connection refused" node="ip-172-31-17-56" Aug 5 21:36:04.623121 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 5 21:36:04.646588 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 5 21:36:04.656964 kubelet[2844]: E0805 21:36:04.655349 2844 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 21:36:04.663300 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 5 21:36:04.666757 kubelet[2844]: I0805 21:36:04.666017 2844 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 21:36:04.666757 kubelet[2844]: I0805 21:36:04.666497 2844 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:36:04.668779 kubelet[2844]: E0805 21:36:04.667781 2844 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-56\" not found" Aug 5 21:36:04.715052 kubelet[2844]: E0805 21:36:04.714877 2844 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-56?timeout=10s\": dial tcp 172.31.17.56:6443: connect: connection refused" interval="400ms" Aug 5 21:36:04.820861 kubelet[2844]: I0805 21:36:04.820415 2844 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-56" Aug 5 21:36:04.821611 kubelet[2844]: E0805 21:36:04.821547 2844 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.17.56:6443/api/v1/nodes\": dial tcp 172.31.17.56:6443: connect: connection refused" node="ip-172-31-17-56" Aug 5 21:36:04.856769 kubelet[2844]: I0805 21:36:04.856671 2844 topology_manager.go:215] "Topology Admit Handler" podUID="02673da5f83a3bb8ed1170d65f46d01d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-56" Aug 5 21:36:04.860415 kubelet[2844]: I0805 21:36:04.860062 2844 topology_manager.go:215] "Topology Admit Handler" podUID="97bc4dd8b31448e3b8ffd5aea0b13658" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:04.864611 kubelet[2844]: I0805 21:36:04.864353 2844 topology_manager.go:215] "Topology Admit Handler" podUID="629927ac92bf6cb2380cafd1b8e2b037" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-56" Aug 5 21:36:04.885881 systemd[1]: Created slice kubepods-burstable-pod02673da5f83a3bb8ed1170d65f46d01d.slice - libcontainer container kubepods-burstable-pod02673da5f83a3bb8ed1170d65f46d01d.slice. Aug 5 21:36:04.905176 systemd[1]: Created slice kubepods-burstable-pod97bc4dd8b31448e3b8ffd5aea0b13658.slice - libcontainer container kubepods-burstable-pod97bc4dd8b31448e3b8ffd5aea0b13658.slice. 
Aug 5 21:36:04.915189 kubelet[2844]: I0805 21:36:04.914808 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97bc4dd8b31448e3b8ffd5aea0b13658-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-56\" (UID: \"97bc4dd8b31448e3b8ffd5aea0b13658\") " pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:04.915189 kubelet[2844]: I0805 21:36:04.914902 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97bc4dd8b31448e3b8ffd5aea0b13658-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-56\" (UID: \"97bc4dd8b31448e3b8ffd5aea0b13658\") " pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:04.915189 kubelet[2844]: I0805 21:36:04.914957 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97bc4dd8b31448e3b8ffd5aea0b13658-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-56\" (UID: \"97bc4dd8b31448e3b8ffd5aea0b13658\") " pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:04.915189 kubelet[2844]: I0805 21:36:04.915018 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/629927ac92bf6cb2380cafd1b8e2b037-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-56\" (UID: \"629927ac92bf6cb2380cafd1b8e2b037\") " pod="kube-system/kube-scheduler-ip-172-31-17-56" Aug 5 21:36:04.915189 kubelet[2844]: I0805 21:36:04.915156 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02673da5f83a3bb8ed1170d65f46d01d-ca-certs\") pod \"kube-apiserver-ip-172-31-17-56\" (UID: \"02673da5f83a3bb8ed1170d65f46d01d\") " pod="kube-system/kube-apiserver-ip-172-31-17-56" Aug 5 21:36:04.916136 kubelet[2844]: I0805 21:36:04.915210 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02673da5f83a3bb8ed1170d65f46d01d-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-56\" (UID: \"02673da5f83a3bb8ed1170d65f46d01d\") " pod="kube-system/kube-apiserver-ip-172-31-17-56" Aug 5 21:36:04.916997 kubelet[2844]: I0805 21:36:04.916279 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02673da5f83a3bb8ed1170d65f46d01d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-56\" (UID: \"02673da5f83a3bb8ed1170d65f46d01d\") " pod="kube-system/kube-apiserver-ip-172-31-17-56" Aug 5 21:36:04.916997 kubelet[2844]: I0805 21:36:04.916436 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97bc4dd8b31448e3b8ffd5aea0b13658-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-56\" (UID: \"97bc4dd8b31448e3b8ffd5aea0b13658\") " pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:04.916997 kubelet[2844]: I0805 21:36:04.916517 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97bc4dd8b31448e3b8ffd5aea0b13658-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-17-56\" (UID: \"97bc4dd8b31448e3b8ffd5aea0b13658\") " pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:04.928951 systemd[1]: Created slice kubepods-burstable-pod629927ac92bf6cb2380cafd1b8e2b037.slice - libcontainer container kubepods-burstable-pod629927ac92bf6cb2380cafd1b8e2b037.slice. Aug 5 21:36:05.116992 kubelet[2844]: E0805 21:36:05.116922 2844 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-56?timeout=10s\": dial tcp 172.31.17.56:6443: connect: connection refused" interval="800ms" Aug 5 21:36:05.200291 containerd[2007]: time="2024-08-05T21:36:05.199971125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-56,Uid:02673da5f83a3bb8ed1170d65f46d01d,Namespace:kube-system,Attempt:0,}" Aug 5 21:36:05.225652 containerd[2007]: time="2024-08-05T21:36:05.224770481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-56,Uid:97bc4dd8b31448e3b8ffd5aea0b13658,Namespace:kube-system,Attempt:0,}" Aug 5 21:36:05.227562 kubelet[2844]: I0805 21:36:05.227393 2844 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-56" Aug 5 21:36:05.229606 kubelet[2844]: E0805 21:36:05.229444 2844 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.17.56:6443/api/v1/nodes\": dial tcp 172.31.17.56:6443: connect: connection refused" node="ip-172-31-17-56" Aug 5 21:36:05.240284 containerd[2007]: time="2024-08-05T21:36:05.240178241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-56,Uid:629927ac92bf6cb2380cafd1b8e2b037,Namespace:kube-system,Attempt:0,}" Aug 5 21:36:05.321709 kubelet[2844]: W0805 21:36:05.321496 2844 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.17.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-56&limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:05.321709 kubelet[2844]: E0805 21:36:05.321635 2844 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-56&limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:05.379586 kubelet[2844]: W0805 21:36:05.379460 2844 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.17.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:05.379586 kubelet[2844]: E0805 21:36:05.379555 2844 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.56:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:05.443958 kubelet[2844]: W0805 21:36:05.443770 2844 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.17.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:05.443958 kubelet[2844]: E0805 21:36:05.443904 2844 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:05.702608 kubelet[2844]: W0805 21:36:05.702460 2844 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.17.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:05.702608 kubelet[2844]: E0805 21:36:05.702564 2844 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:05.768173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800953559.mount: Deactivated successfully. Aug 5 21:36:05.786855 containerd[2007]: time="2024-08-05T21:36:05.786760328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:36:05.788992 containerd[2007]: time="2024-08-05T21:36:05.788890256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 5 21:36:05.791005 containerd[2007]: time="2024-08-05T21:36:05.790851092Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:36:05.793085 containerd[2007]: time="2024-08-05T21:36:05.792956960Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:36:05.795112 containerd[2007]: time="2024-08-05T21:36:05.794934884Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:36:05.797292 containerd[2007]: time="2024-08-05T21:36:05.797070872Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:36:05.799265 containerd[2007]: time="2024-08-05T21:36:05.799124036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:36:05.809203 containerd[2007]: time="2024-08-05T21:36:05.808839992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:36:05.814618 containerd[2007]: time="2024-08-05T21:36:05.813842192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 588.915663ms" Aug 5 21:36:05.820037 containerd[2007]: time="2024-08-05T21:36:05.819912836Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 619.751295ms" Aug 5 21:36:05.820583 containerd[2007]: time="2024-08-05T21:36:05.820462424Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.139283ms" Aug 5 21:36:05.918810 kubelet[2844]: E0805 21:36:05.918714 2844 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-56?timeout=10s\": dial tcp 172.31.17.56:6443: connect: connection refused" interval="1.6s" Aug 5 21:36:06.035492 kubelet[2844]: I0805 21:36:06.034816 2844 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-56" Aug 5 21:36:06.036709 kubelet[2844]: E0805 21:36:06.036582 2844 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.17.56:6443/api/v1/nodes\": dial tcp 172.31.17.56:6443: connect: connection refused" node="ip-172-31-17-56" Aug 5 21:36:06.099576 containerd[2007]: time="2024-08-05T21:36:06.099039798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:36:06.099576 containerd[2007]: time="2024-08-05T21:36:06.099197022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:06.099576 containerd[2007]: time="2024-08-05T21:36:06.099230682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:36:06.099576 containerd[2007]: time="2024-08-05T21:36:06.099273438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:06.103944 containerd[2007]: time="2024-08-05T21:36:06.102041526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:36:06.104181 containerd[2007]: time="2024-08-05T21:36:06.103695294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:06.104181 containerd[2007]: time="2024-08-05T21:36:06.103801782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:36:06.104181 containerd[2007]: time="2024-08-05T21:36:06.103841358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:06.106437 containerd[2007]: time="2024-08-05T21:36:06.105481134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:36:06.106437 containerd[2007]: time="2024-08-05T21:36:06.105692922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:06.106437 containerd[2007]: time="2024-08-05T21:36:06.105727350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:36:06.106437 containerd[2007]: time="2024-08-05T21:36:06.105753918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:06.176886 systemd[1]: Started cri-containerd-15138d4cb52d00f068b38cbd585a37971394c8381baad3e16c365b27b49c7da6.scope - libcontainer container 15138d4cb52d00f068b38cbd585a37971394c8381baad3e16c365b27b49c7da6. Aug 5 21:36:06.209879 systemd[1]: Started cri-containerd-ec5dc293df62715c0e55a60395495f2d365cc09b4c8947ba8794846204704410.scope - libcontainer container ec5dc293df62715c0e55a60395495f2d365cc09b4c8947ba8794846204704410. Aug 5 21:36:06.221564 systemd[1]: Started cri-containerd-ffcab1c8c194914d0743aa78f285723e19d380c02323bc4425aaa9a23be6c8b1.scope - libcontainer container ffcab1c8c194914d0743aa78f285723e19d380c02323bc4425aaa9a23be6c8b1. Aug 5 21:36:06.339469 containerd[2007]: time="2024-08-05T21:36:06.338797123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-56,Uid:02673da5f83a3bb8ed1170d65f46d01d,Namespace:kube-system,Attempt:0,} returns sandbox id \"15138d4cb52d00f068b38cbd585a37971394c8381baad3e16c365b27b49c7da6\"" Aug 5 21:36:06.354821 containerd[2007]: time="2024-08-05T21:36:06.354707527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-56,Uid:97bc4dd8b31448e3b8ffd5aea0b13658,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec5dc293df62715c0e55a60395495f2d365cc09b4c8947ba8794846204704410\"" Aug 5 21:36:06.361214 containerd[2007]: time="2024-08-05T21:36:06.360978835Z" level=info msg="CreateContainer within sandbox \"15138d4cb52d00f068b38cbd585a37971394c8381baad3e16c365b27b49c7da6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 21:36:06.367672 containerd[2007]: time="2024-08-05T21:36:06.367434139Z" level=info msg="CreateContainer within sandbox \"ec5dc293df62715c0e55a60395495f2d365cc09b4c8947ba8794846204704410\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 21:36:06.392122 containerd[2007]: time="2024-08-05T21:36:06.392024923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-56,Uid:629927ac92bf6cb2380cafd1b8e2b037,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffcab1c8c194914d0743aa78f285723e19d380c02323bc4425aaa9a23be6c8b1\"" Aug 5 21:36:06.403414 containerd[2007]: time="2024-08-05T21:36:06.403319695Z" level=info msg="CreateContainer within sandbox \"ffcab1c8c194914d0743aa78f285723e19d380c02323bc4425aaa9a23be6c8b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 21:36:06.406804 containerd[2007]: time="2024-08-05T21:36:06.406694887Z" level=info msg="CreateContainer within sandbox \"ec5dc293df62715c0e55a60395495f2d365cc09b4c8947ba8794846204704410\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434\"" Aug 5 21:36:06.409247 containerd[2007]: time="2024-08-05T21:36:06.408801895Z" level=info msg="StartContainer for \"4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434\"" Aug 5 21:36:06.426781 containerd[2007]: time="2024-08-05T21:36:06.426719035Z" level=info msg="CreateContainer 
within sandbox \"15138d4cb52d00f068b38cbd585a37971394c8381baad3e16c365b27b49c7da6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b5fb9c3d245ab528f263dfb477aa9bab2c4ec3f46f865e369d5733783beeae2\"" Aug 5 21:36:06.428033 containerd[2007]: time="2024-08-05T21:36:06.427793659Z" level=info msg="StartContainer for \"9b5fb9c3d245ab528f263dfb477aa9bab2c4ec3f46f865e369d5733783beeae2\"" Aug 5 21:36:06.448887 containerd[2007]: time="2024-08-05T21:36:06.448499011Z" level=info msg="CreateContainer within sandbox \"ffcab1c8c194914d0743aa78f285723e19d380c02323bc4425aaa9a23be6c8b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66\"" Aug 5 21:36:06.451473 containerd[2007]: time="2024-08-05T21:36:06.450089455Z" level=info msg="StartContainer for \"b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66\"" Aug 5 21:36:06.499997 systemd[1]: Started cri-containerd-4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434.scope - libcontainer container 4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434. Aug 5 21:36:06.517527 systemd[1]: Started cri-containerd-9b5fb9c3d245ab528f263dfb477aa9bab2c4ec3f46f865e369d5733783beeae2.scope - libcontainer container 9b5fb9c3d245ab528f263dfb477aa9bab2c4ec3f46f865e369d5733783beeae2. Aug 5 21:36:06.584433 systemd[1]: Started cri-containerd-b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66.scope - libcontainer container b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66. Aug 5 21:36:06.595078 kubelet[2844]: E0805 21:36:06.594679 2844 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.56:6443: connect: connection refused Aug 5 21:36:06.718171 containerd[2007]: time="2024-08-05T21:36:06.716240493Z" level=info msg="StartContainer for \"4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434\" returns successfully" Aug 5 21:36:06.743863 containerd[2007]: time="2024-08-05T21:36:06.743198793Z" level=info msg="StartContainer for \"9b5fb9c3d245ab528f263dfb477aa9bab2c4ec3f46f865e369d5733783beeae2\" returns successfully" Aug 5 21:36:06.786954 containerd[2007]: time="2024-08-05T21:36:06.786788001Z" level=info msg="StartContainer for \"b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66\" returns successfully" Aug 5 21:36:07.644420 kubelet[2844]: I0805 21:36:07.643623 2844 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-56" Aug 5 21:36:09.234838 update_engine[1995]: I0805 21:36:09.233464 1995 update_attempter.cc:509] Updating boot flags... 
Aug 5 21:36:09.429487 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3133) Aug 5 21:36:10.092534 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3136) Aug 5 21:36:10.660518 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3136) Aug 5 21:36:11.833738 kubelet[2844]: E0805 21:36:11.833608 2844 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-56\" not found" node="ip-172-31-17-56" Aug 5 21:36:11.974505 kubelet[2844]: I0805 21:36:11.974274 2844 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-17-56" Aug 5 21:36:12.011294 kubelet[2844]: E0805 21:36:12.011110 2844 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-17-56.17e8f2c021eeef92", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-17-56", UID:"ip-172-31-17-56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-17-56"}, FirstTimestamp:time.Date(2024, time.August, 5, 21, 36, 4, 486164370, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 21, 36, 4, 486164370, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-17-56"}': 'namespaces "default" not found' (will not retry!) Aug 5 21:36:12.094547 kubelet[2844]: E0805 21:36:12.093721 2844 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-17-56.17e8f2c0228cf7b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-17-56", UID:"ip-172-31-17-56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-17-56"}, FirstTimestamp:time.Date(2024, time.August, 5, 21, 36, 4, 496521138, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 21, 36, 4, 496521138, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-17-56"}': 'namespaces "default" not found' (will not retry!) 
Aug 5 21:36:12.477921 kubelet[2844]: I0805 21:36:12.477499 2844 apiserver.go:52] "Watching apiserver" Aug 5 21:36:12.512481 kubelet[2844]: I0805 21:36:12.512248 2844 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:36:14.593189 kubelet[2844]: I0805 21:36:14.593031 2844 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-56" podStartSLOduration=1.592850704 podCreationTimestamp="2024-08-05 21:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:36:14.592333396 +0000 UTC m=+11.627202443" watchObservedRunningTime="2024-08-05 21:36:14.592850704 +0000 UTC m=+11.627719751" Aug 5 21:36:15.176166 systemd[1]: Reloading requested from client PID 3387 ('systemctl') (unit session-7.scope)... Aug 5 21:36:15.176193 systemd[1]: Reloading... Aug 5 21:36:15.406479 zram_generator::config[3425]: No configuration found. Aug 5 21:36:15.810619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:36:16.034998 systemd[1]: Reloading finished in 857 ms. Aug 5 21:36:16.147186 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:16.149139 kubelet[2844]: I0805 21:36:16.149090 2844 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:36:16.168425 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 21:36:16.170172 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:16.170336 systemd[1]: kubelet.service: Consumed 2.649s CPU time, 115.2M memory peak, 0B memory swap peak. Aug 5 21:36:16.185972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:16.621877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:16.639998 (kubelet)[3485]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 21:36:16.811182 kubelet[3485]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:36:16.811182 kubelet[3485]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 21:36:16.811182 kubelet[3485]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 5 21:36:16.812121 kubelet[3485]: I0805 21:36:16.811326 3485 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 21:36:16.832461 kubelet[3485]: I0805 21:36:16.831537 3485 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 21:36:16.832461 kubelet[3485]: I0805 21:36:16.831612 3485 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 21:36:16.832461 kubelet[3485]: I0805 21:36:16.832256 3485 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 21:36:16.842171 kubelet[3485]: I0805 21:36:16.842102 3485 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 21:36:16.846764 kubelet[3485]: I0805 21:36:16.845483 3485 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:36:16.871676 kubelet[3485]: W0805 21:36:16.871615 3485 machine.go:65] Cannot read vendor id correctly, set empty. Aug 5 21:36:16.874152 kubelet[3485]: I0805 21:36:16.873892 3485 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 5 21:36:16.877303 kubelet[3485]: I0805 21:36:16.874793 3485 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 21:36:16.877303 kubelet[3485]: I0805 21:36:16.875240 3485 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 21:36:16.877303 kubelet[3485]: I0805 21:36:16.875329 3485 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 21:36:16.877303 kubelet[3485]: I0805 21:36:16.875351 3485 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 21:36:16.877303 kubelet[3485]: I0805 21:36:16.876166 3485 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:36:16.877303 kubelet[3485]: I0805 21:36:16.876552 3485 kubelet.go:393] "Attempting to sync node with API server" Aug 5 21:36:16.877931 kubelet[3485]: I0805 21:36:16.876599 3485 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 
21:36:16.877931 kubelet[3485]: I0805 21:36:16.876646 3485 kubelet.go:309] "Adding apiserver pod source" Aug 5 21:36:16.877931 kubelet[3485]: I0805 21:36:16.876703 3485 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 21:36:16.888209 kubelet[3485]: I0805 21:36:16.884730 3485 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 21:36:16.888209 kubelet[3485]: I0805 21:36:16.886152 3485 server.go:1232] "Started kubelet" Aug 5 21:36:16.894762 kubelet[3485]: I0805 21:36:16.894194 3485 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 21:36:16.906774 kubelet[3485]: I0805 21:36:16.905639 3485 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 21:36:16.911137 kubelet[3485]: I0805 21:36:16.909915 3485 server.go:462] "Adding debug handlers to kubelet server" Aug 5 21:36:16.916467 kubelet[3485]: I0805 21:36:16.916278 3485 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 21:36:16.918947 kubelet[3485]: I0805 21:36:16.918741 3485 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 21:36:16.929866 kubelet[3485]: I0805 21:36:16.929734 3485 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 21:36:16.931332 kubelet[3485]: I0805 21:36:16.930954 3485 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 21:36:16.933565 kubelet[3485]: E0805 21:36:16.931763 3485 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 21:36:16.933565 kubelet[3485]: E0805 21:36:16.931877 3485 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 21:36:16.933565 kubelet[3485]: I0805 21:36:16.933256 3485 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:36:17.026407 kubelet[3485]: I0805 21:36:17.026301 3485 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 21:36:17.057079 kubelet[3485]: I0805 21:36:17.054641 3485 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 21:36:17.057079 kubelet[3485]: I0805 21:36:17.054922 3485 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 21:36:17.057435 kubelet[3485]: I0805 21:36:17.057404 3485 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 21:36:17.062625 kubelet[3485]: E0805 21:36:17.061153 3485 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 21:36:17.082982 kubelet[3485]: E0805 21:36:17.082890 3485 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Aug 5 21:36:17.109559 kubelet[3485]: I0805 21:36:17.109446 3485 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-17-56" Aug 5 21:36:17.149728 kubelet[3485]: I0805 21:36:17.147334 3485 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-17-56" Aug 5 21:36:17.151809 kubelet[3485]: I0805 21:36:17.151715 3485 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-17-56" Aug 5 21:36:17.162295 kubelet[3485]: E0805 21:36:17.161491 3485 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 21:36:17.296031 kubelet[3485]: I0805 21:36:17.295346 3485 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 21:36:17.296031 kubelet[3485]: I0805 21:36:17.295432 3485 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 21:36:17.296031 kubelet[3485]: I0805 21:36:17.295468 3485 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:36:17.296031 kubelet[3485]: I0805 21:36:17.295782 3485 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 21:36:17.296031 kubelet[3485]: I0805 21:36:17.295820 3485 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 21:36:17.296031 kubelet[3485]: I0805 21:36:17.295838 3485 policy_none.go:49] "None policy: Start" Aug 5 21:36:17.299998 kubelet[3485]: I0805 21:36:17.299315 3485 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 21:36:17.299998 kubelet[3485]: I0805 21:36:17.299423 3485 state_mem.go:35] "Initializing new in-memory state store" Aug 5 21:36:17.299998 kubelet[3485]: I0805 21:36:17.299810 3485 state_mem.go:75] "Updated machine memory state" Aug 5 21:36:17.314011 kubelet[3485]: I0805 21:36:17.312523 3485 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 21:36:17.314907 kubelet[3485]: I0805 21:36:17.314715 3485 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:36:17.362479 kubelet[3485]: I0805 21:36:17.362399 3485 topology_manager.go:215] "Topology Admit Handler" podUID="02673da5f83a3bb8ed1170d65f46d01d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-56" Aug 5 21:36:17.363139 kubelet[3485]: I0805 21:36:17.362687 3485 topology_manager.go:215] "Topology Admit Handler" podUID="97bc4dd8b31448e3b8ffd5aea0b13658" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:17.363139 kubelet[3485]: I0805 21:36:17.362804 3485 topology_manager.go:215] "Topology Admit Handler" podUID="629927ac92bf6cb2380cafd1b8e2b037" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-56" Aug 5 21:36:17.376880 kubelet[3485]: E0805 21:36:17.376774 3485 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-56\" already exists" 
pod="kube-system/kube-apiserver-ip-172-31-17-56" Aug 5 21:36:17.378664 kubelet[3485]: E0805 21:36:17.378611 3485 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-17-56\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:17.447074 kubelet[3485]: I0805 21:36:17.446769 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97bc4dd8b31448e3b8ffd5aea0b13658-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-56\" (UID: \"97bc4dd8b31448e3b8ffd5aea0b13658\") " pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:17.447074 kubelet[3485]: I0805 21:36:17.446974 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97bc4dd8b31448e3b8ffd5aea0b13658-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-56\" (UID: \"97bc4dd8b31448e3b8ffd5aea0b13658\") " pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:17.447074 kubelet[3485]: I0805 21:36:17.447160 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/629927ac92bf6cb2380cafd1b8e2b037-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-56\" (UID: \"629927ac92bf6cb2380cafd1b8e2b037\") " pod="kube-system/kube-scheduler-ip-172-31-17-56" Aug 5 21:36:17.447074 kubelet[3485]: I0805 21:36:17.447228 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02673da5f83a3bb8ed1170d65f46d01d-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-56\" (UID: \"02673da5f83a3bb8ed1170d65f46d01d\") " pod="kube-system/kube-apiserver-ip-172-31-17-56" Aug 5 21:36:17.447074 kubelet[3485]: I0805 21:36:17.447334 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02673da5f83a3bb8ed1170d65f46d01d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-56\" (UID: \"02673da5f83a3bb8ed1170d65f46d01d\") " pod="kube-system/kube-apiserver-ip-172-31-17-56" Aug 5 21:36:17.448929 kubelet[3485]: I0805 21:36:17.447472 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97bc4dd8b31448e3b8ffd5aea0b13658-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-56\" (UID: \"97bc4dd8b31448e3b8ffd5aea0b13658\") " pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:17.448929 kubelet[3485]: I0805 21:36:17.447574 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97bc4dd8b31448e3b8ffd5aea0b13658-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-56\" (UID: \"97bc4dd8b31448e3b8ffd5aea0b13658\") " pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:17.448929 kubelet[3485]: I0805 21:36:17.447678 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02673da5f83a3bb8ed1170d65f46d01d-ca-certs\") pod \"kube-apiserver-ip-172-31-17-56\" (UID: \"02673da5f83a3bb8ed1170d65f46d01d\") " 
pod="kube-system/kube-apiserver-ip-172-31-17-56" Aug 5 21:36:17.448929 kubelet[3485]: I0805 21:36:17.447791 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97bc4dd8b31448e3b8ffd5aea0b13658-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-56\" (UID: \"97bc4dd8b31448e3b8ffd5aea0b13658\") " pod="kube-system/kube-controller-manager-ip-172-31-17-56" Aug 5 21:36:17.878979 kubelet[3485]: I0805 21:36:17.878747 3485 apiserver.go:52] "Watching apiserver" Aug 5 21:36:17.933433 kubelet[3485]: I0805 21:36:17.931283 3485 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:36:18.198610 kubelet[3485]: I0805 21:36:18.197666 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-56" podStartSLOduration=1.197530254 podCreationTimestamp="2024-08-05 21:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:36:18.158234429 +0000 UTC m=+1.500877952" watchObservedRunningTime="2024-08-05 21:36:18.197530254 +0000 UTC m=+1.540173681" Aug 5 21:36:18.222533 kubelet[3485]: I0805 21:36:18.221852 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-56" podStartSLOduration=3.221768262 podCreationTimestamp="2024-08-05 21:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:36:18.201150738 +0000 UTC m=+1.543794549" watchObservedRunningTime="2024-08-05 21:36:18.221768262 +0000 UTC m=+1.564412097" Aug 5 21:36:25.611834 sudo[2332]: pam_unix(sudo:session): session closed for user root Aug 5 21:36:25.637650 sshd[2329]: pam_unix(sshd:session): session closed for user core Aug 5 21:36:25.643708 systemd[1]: sshd@6-172.31.17.56:22-139.178.68.195:34760.service: Deactivated successfully. Aug 5 21:36:25.648828 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 21:36:25.649224 systemd[1]: session-7.scope: Consumed 10.935s CPU time, 132.2M memory peak, 0B memory swap peak. Aug 5 21:36:25.651944 systemd-logind[1994]: Session 7 logged out. Waiting for processes to exit. Aug 5 21:36:25.655084 systemd-logind[1994]: Removed session 7. Aug 5 21:36:28.566250 kubelet[3485]: I0805 21:36:28.565677 3485 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 21:36:28.567238 containerd[2007]: time="2024-08-05T21:36:28.566979905Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 5 21:36:28.568074 kubelet[3485]: I0805 21:36:28.567807 3485 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 21:36:29.303217 kubelet[3485]: I0805 21:36:29.303127 3485 topology_manager.go:215] "Topology Admit Handler" podUID="a6643c05-1fab-4e2d-95ac-7c37bf40b91e" podNamespace="kube-system" podName="kube-proxy-fgkzs" Aug 5 21:36:29.331659 kubelet[3485]: I0805 21:36:29.331310 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75d9\" (UniqueName: \"kubernetes.io/projected/a6643c05-1fab-4e2d-95ac-7c37bf40b91e-kube-api-access-k75d9\") pod \"kube-proxy-fgkzs\" (UID: \"a6643c05-1fab-4e2d-95ac-7c37bf40b91e\") " pod="kube-system/kube-proxy-fgkzs" Aug 5 21:36:29.331975 kubelet[3485]: I0805 21:36:29.331836 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a6643c05-1fab-4e2d-95ac-7c37bf40b91e-kube-proxy\") pod \"kube-proxy-fgkzs\" (UID: \"a6643c05-1fab-4e2d-95ac-7c37bf40b91e\") " pod="kube-system/kube-proxy-fgkzs" Aug 5 21:36:29.331975 kubelet[3485]: I0805 21:36:29.331900 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6643c05-1fab-4e2d-95ac-7c37bf40b91e-lib-modules\") pod \"kube-proxy-fgkzs\" (UID: \"a6643c05-1fab-4e2d-95ac-7c37bf40b91e\") " pod="kube-system/kube-proxy-fgkzs" Aug 5 21:36:29.331975 kubelet[3485]: I0805 21:36:29.331949 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6643c05-1fab-4e2d-95ac-7c37bf40b91e-xtables-lock\") pod \"kube-proxy-fgkzs\" (UID: \"a6643c05-1fab-4e2d-95ac-7c37bf40b91e\") " pod="kube-system/kube-proxy-fgkzs" Aug 5 21:36:29.341691 systemd[1]: Created slice kubepods-besteffort-poda6643c05_1fab_4e2d_95ac_7c37bf40b91e.slice - libcontainer container kubepods-besteffort-poda6643c05_1fab_4e2d_95ac_7c37bf40b91e.slice. Aug 5 21:36:29.497598 kubelet[3485]: I0805 21:36:29.496786 3485 topology_manager.go:215] "Topology Admit Handler" podUID="bc03b633-e8b1-4398-a19b-740d0f6573c9" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-9plx5" Aug 5 21:36:29.516049 systemd[1]: Created slice kubepods-besteffort-podbc03b633_e8b1_4398_a19b_740d0f6573c9.slice - libcontainer container kubepods-besteffort-podbc03b633_e8b1_4398_a19b_740d0f6573c9.slice. 
Aug 5 21:36:29.536973 kubelet[3485]: I0805 21:36:29.536914 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrqn\" (UniqueName: \"kubernetes.io/projected/bc03b633-e8b1-4398-a19b-740d0f6573c9-kube-api-access-gkrqn\") pod \"tigera-operator-76c4974c85-9plx5\" (UID: \"bc03b633-e8b1-4398-a19b-740d0f6573c9\") " pod="tigera-operator/tigera-operator-76c4974c85-9plx5" Aug 5 21:36:29.537417 kubelet[3485]: I0805 21:36:29.537300 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc03b633-e8b1-4398-a19b-740d0f6573c9-var-lib-calico\") pod \"tigera-operator-76c4974c85-9plx5\" (UID: \"bc03b633-e8b1-4398-a19b-740d0f6573c9\") " pod="tigera-operator/tigera-operator-76c4974c85-9plx5" Aug 5 21:36:29.672891 containerd[2007]: time="2024-08-05T21:36:29.672615199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgkzs,Uid:a6643c05-1fab-4e2d-95ac-7c37bf40b91e,Namespace:kube-system,Attempt:0,}" Aug 5 21:36:29.728173 containerd[2007]: time="2024-08-05T21:36:29.727902691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:36:29.728563 containerd[2007]: time="2024-08-05T21:36:29.728217175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:29.728563 containerd[2007]: time="2024-08-05T21:36:29.728337523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:36:29.728935 containerd[2007]: time="2024-08-05T21:36:29.728546059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:29.804005 systemd[1]: Started cri-containerd-d8ede166806468d1be399a3abed6ff6b237d20ef1688121a1380b36275e5616a.scope - libcontainer container d8ede166806468d1be399a3abed6ff6b237d20ef1688121a1380b36275e5616a. Aug 5 21:36:29.823798 containerd[2007]: time="2024-08-05T21:36:29.823715251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-9plx5,Uid:bc03b633-e8b1-4398-a19b-740d0f6573c9,Namespace:tigera-operator,Attempt:0,}" Aug 5 21:36:29.879498 containerd[2007]: time="2024-08-05T21:36:29.879317624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgkzs,Uid:a6643c05-1fab-4e2d-95ac-7c37bf40b91e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8ede166806468d1be399a3abed6ff6b237d20ef1688121a1380b36275e5616a\"" Aug 5 21:36:29.908449 containerd[2007]: time="2024-08-05T21:36:29.907604468Z" level=info msg="CreateContainer within sandbox \"d8ede166806468d1be399a3abed6ff6b237d20ef1688121a1380b36275e5616a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 21:36:29.938747 containerd[2007]: time="2024-08-05T21:36:29.936665936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:36:29.938747 containerd[2007]: time="2024-08-05T21:36:29.936849152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:29.938747 containerd[2007]: time="2024-08-05T21:36:29.936890936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:36:29.938747 containerd[2007]: time="2024-08-05T21:36:29.936920024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:29.966451 containerd[2007]: time="2024-08-05T21:36:29.966173996Z" level=info msg="CreateContainer within sandbox \"d8ede166806468d1be399a3abed6ff6b237d20ef1688121a1380b36275e5616a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"abb3c386dd9c01d1a4dec9e86b5b062b3ad9882ccd5e62d5789a1202dcc64cda\"" Aug 5 21:36:29.971173 containerd[2007]: time="2024-08-05T21:36:29.969934856Z" level=info msg="StartContainer for \"abb3c386dd9c01d1a4dec9e86b5b062b3ad9882ccd5e62d5789a1202dcc64cda\"" Aug 5 21:36:29.992709 systemd[1]: Started cri-containerd-52f02618b273a8b4a159d7dbbcd03f25d74d44a1af11abcbe9711dcff2ef322e.scope - libcontainer container 52f02618b273a8b4a159d7dbbcd03f25d74d44a1af11abcbe9711dcff2ef322e. Aug 5 21:36:30.054361 systemd[1]: Started cri-containerd-abb3c386dd9c01d1a4dec9e86b5b062b3ad9882ccd5e62d5789a1202dcc64cda.scope - libcontainer container abb3c386dd9c01d1a4dec9e86b5b062b3ad9882ccd5e62d5789a1202dcc64cda. Aug 5 21:36:30.109998 containerd[2007]: time="2024-08-05T21:36:30.109934645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-9plx5,Uid:bc03b633-e8b1-4398-a19b-740d0f6573c9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"52f02618b273a8b4a159d7dbbcd03f25d74d44a1af11abcbe9711dcff2ef322e\"" Aug 5 21:36:30.118906 containerd[2007]: time="2024-08-05T21:36:30.118492169Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 21:36:30.176234 containerd[2007]: time="2024-08-05T21:36:30.175750253Z" level=info msg="StartContainer for \"abb3c386dd9c01d1a4dec9e86b5b062b3ad9882ccd5e62d5789a1202dcc64cda\" returns successfully" Aug 5 21:36:30.250549 kubelet[3485]: I0805 21:36:30.250298 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fgkzs" podStartSLOduration=1.250219229 podCreationTimestamp="2024-08-05 21:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:36:30.250090109 +0000 UTC m=+13.592733560" watchObservedRunningTime="2024-08-05 21:36:30.250219229 +0000 UTC m=+13.592862656" Aug 5 21:36:30.474974 systemd[1]: run-containerd-runc-k8s.io-d8ede166806468d1be399a3abed6ff6b237d20ef1688121a1380b36275e5616a-runc.UmrXoj.mount: Deactivated successfully. Aug 5 21:36:31.561686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551847321.mount: Deactivated successfully. 
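The kube-proxy flow above runs through the usual CRI sequence visible in the containerd messages: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer then reports success. A minimal sketch that ties those ids back together from journal text on stdin; the message patterns are written against the containerd lines shown here, not as a general parser.

    import re
    import sys

    # containerd quotes the 64-hex sandbox/container ids inside its msg field, e.g.
    #   returns sandbox id \"d8ede1...\"  /  returns container id \"abb3c3...\"
    #   StartContainer for \"abb3c3...\" returns successfully
    EVENTS = [
        ('sandbox created',   re.compile(r'returns sandbox id \\?"([0-9a-f]{64})\\?"')),
        ('container created', re.compile(r'returns container id \\?"([0-9a-f]{64})\\?"')),
        ('container started', re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully')),
    ]

    def lifecycle(journal_text: str) -> list[tuple[str, str]]:
        """Return (event, id) pairs in the order they appear in the journal text."""
        found = []
        for label, pattern in EVENTS:
            for m in pattern.finditer(journal_text):
                found.append((m.start(), label, m.group(1)))
        return [(label, cid) for _, label, cid in sorted(found)]

    if __name__ == '__main__':
        for label, cid in lifecycle(sys.stdin.read()):
            print(f'{label}: {cid[:12]}')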
Aug 5 21:36:32.296810 containerd[2007]: time="2024-08-05T21:36:32.296599556Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:32.299761 containerd[2007]: time="2024-08-05T21:36:32.299576312Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473610" Aug 5 21:36:32.305360 containerd[2007]: time="2024-08-05T21:36:32.305240072Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:32.321290 containerd[2007]: time="2024-08-05T21:36:32.320165036Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:32.325757 containerd[2007]: time="2024-08-05T21:36:32.324532748Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.205809135s" Aug 5 21:36:32.325757 containerd[2007]: time="2024-08-05T21:36:32.324668204Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Aug 5 21:36:32.331433 containerd[2007]: time="2024-08-05T21:36:32.330895196Z" level=info msg="CreateContainer within sandbox \"52f02618b273a8b4a159d7dbbcd03f25d74d44a1af11abcbe9711dcff2ef322e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 21:36:32.371721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2925585900.mount: Deactivated successfully. Aug 5 21:36:32.392025 containerd[2007]: time="2024-08-05T21:36:32.391785296Z" level=info msg="CreateContainer within sandbox \"52f02618b273a8b4a159d7dbbcd03f25d74d44a1af11abcbe9711dcff2ef322e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac\"" Aug 5 21:36:32.394140 containerd[2007]: time="2024-08-05T21:36:32.393865580Z" level=info msg="StartContainer for \"85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac\"" Aug 5 21:36:32.464047 systemd[1]: Started cri-containerd-85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac.scope - libcontainer container 85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac. Aug 5 21:36:32.493758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034604789.mount: Deactivated successfully. 
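The "Pulled image" line above packs the tag, the resolved image id, the repo digest, the size, and the wall-clock pull time into a single message. A small regex sketch to lift those fields out; the pattern is written against this particular containerd message and checked against the values shown above.

    import re

    PULLED = re.compile(
        r'Pulled image \\?"(?P<tag>[^"\\]+)\\?" with image id \\?"(?P<image_id>[^"\\]+)\\?", '
        r'repo tag \\?"[^"\\]+\\?", repo digest \\?"(?P<digest>[^"\\]+)\\?", '
        r'size \\?"(?P<size>\d+)\\?" in (?P<duration>[\d.]+)s'
    )

    # Message body copied from the journal entry above.
    MSG = ('Pulled image \\"quay.io/tigera/operator:v1.34.0\\" with image id '
           '\\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\\", '
           'repo tag \\"quay.io/tigera/operator:v1.34.0\\", repo digest '
           '\\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\\", '
           'size \\"19467821\\" in 2.205809135s')

    m = PULLED.search(MSG)
    print(m['tag'], int(m['size']), float(m['duration']))
    # quay.io/tigera/operator:v1.34.0 19467821 2.205809135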
Aug 5 21:36:32.552290 containerd[2007]: time="2024-08-05T21:36:32.551993025Z" level=info msg="StartContainer for \"85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac\" returns successfully" Aug 5 21:36:33.260053 kubelet[3485]: I0805 21:36:33.259844 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-9plx5" podStartSLOduration=2.047830697 podCreationTimestamp="2024-08-05 21:36:29 +0000 UTC" firstStartedPulling="2024-08-05 21:36:30.113178725 +0000 UTC m=+13.455822152" lastFinishedPulling="2024-08-05 21:36:32.325119344 +0000 UTC m=+15.667762771" observedRunningTime="2024-08-05 21:36:33.25890038 +0000 UTC m=+16.601543819" watchObservedRunningTime="2024-08-05 21:36:33.259771316 +0000 UTC m=+16.602414755" Aug 5 21:36:37.236852 kubelet[3485]: I0805 21:36:37.236742 3485 topology_manager.go:215] "Topology Admit Handler" podUID="f3b56607-db40-4011-b9a5-be30dd3ae142" podNamespace="calico-system" podName="calico-typha-76445c8dcc-d772d" Aug 5 21:36:37.266260 systemd[1]: Created slice kubepods-besteffort-podf3b56607_db40_4011_b9a5_be30dd3ae142.slice - libcontainer container kubepods-besteffort-podf3b56607_db40_4011_b9a5_be30dd3ae142.slice. Aug 5 21:36:37.295852 kubelet[3485]: I0805 21:36:37.294939 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b56607-db40-4011-b9a5-be30dd3ae142-tigera-ca-bundle\") pod \"calico-typha-76445c8dcc-d772d\" (UID: \"f3b56607-db40-4011-b9a5-be30dd3ae142\") " pod="calico-system/calico-typha-76445c8dcc-d772d" Aug 5 21:36:37.295852 kubelet[3485]: I0805 21:36:37.295083 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fl4d\" (UniqueName: \"kubernetes.io/projected/f3b56607-db40-4011-b9a5-be30dd3ae142-kube-api-access-4fl4d\") pod \"calico-typha-76445c8dcc-d772d\" (UID: \"f3b56607-db40-4011-b9a5-be30dd3ae142\") " pod="calico-system/calico-typha-76445c8dcc-d772d" Aug 5 21:36:37.295852 kubelet[3485]: I0805 21:36:37.295161 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f3b56607-db40-4011-b9a5-be30dd3ae142-typha-certs\") pod \"calico-typha-76445c8dcc-d772d\" (UID: \"f3b56607-db40-4011-b9a5-be30dd3ae142\") " pod="calico-system/calico-typha-76445c8dcc-d772d" Aug 5 21:36:37.525126 kubelet[3485]: I0805 21:36:37.524940 3485 topology_manager.go:215] "Topology Admit Handler" podUID="4afa8650-b6d4-4542-b051-1a7baa9ffcaa" podNamespace="calico-system" podName="calico-node-rk4jp" Aug 5 21:36:37.548847 systemd[1]: Created slice kubepods-besteffort-pod4afa8650_b6d4_4542_b051_1a7baa9ffcaa.slice - libcontainer container kubepods-besteffort-pod4afa8650_b6d4_4542_b051_1a7baa9ffcaa.slice. 
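The "Observed pod startup duration" entries above are internally consistent: for the static pods the reported podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp, and for tigera-operator the image-pull window (lastFinishedPulling minus firstStartedPulling) is additionally subtracted. A short check of that arithmetic against the tigera-operator line; treat the formula as an observation about these specific log fields, not a statement of the kubelet's exact implementation.

    from datetime import datetime

    def ts(s: str) -> datetime:
        """Parse a field like '2024-08-05 21:36:33.259771316 +0000 UTC'.
        Nanosecond digits are truncated to microseconds, which is enough here."""
        stamp = s.split(' +')[0]
        if '.' not in stamp:
            stamp += '.0'
        head, frac = stamp.split('.')
        return datetime.strptime(f'{head}.{frac[:6].ljust(6, "0")}', '%Y-%m-%d %H:%M:%S.%f')

    # Values copied from the tigera-operator "Observed pod startup duration" line above.
    created    = ts('2024-08-05 21:36:29 +0000 UTC')
    first_pull = ts('2024-08-05 21:36:30.113178725 +0000 UTC')
    last_pull  = ts('2024-08-05 21:36:32.325119344 +0000 UTC')
    watched    = ts('2024-08-05 21:36:33.259771316 +0000 UTC')

    slo = (watched - created) - (last_pull - first_pull)
    print(slo.total_seconds())
    # 2.04783 at microsecond precision; the log reports podStartSLOduration=2.047830697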
Aug 5 21:36:37.581228 containerd[2007]: time="2024-08-05T21:36:37.581147498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76445c8dcc-d772d,Uid:f3b56607-db40-4011-b9a5-be30dd3ae142,Namespace:calico-system,Attempt:0,}" Aug 5 21:36:37.597732 kubelet[3485]: I0805 21:36:37.597653 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-xtables-lock\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.597961 kubelet[3485]: I0805 21:36:37.597746 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-var-lib-calico\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.597961 kubelet[3485]: I0805 21:36:37.597800 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-node-certs\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.597961 kubelet[3485]: I0805 21:36:37.597845 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-cni-bin-dir\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.597961 kubelet[3485]: I0805 21:36:37.597900 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-cni-log-dir\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.597961 kubelet[3485]: I0805 21:36:37.597944 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-lib-modules\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.598443 kubelet[3485]: I0805 21:36:37.597985 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-policysync\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.598443 kubelet[3485]: I0805 21:36:37.598034 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-var-run-calico\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.598443 kubelet[3485]: I0805 21:36:37.598089 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-cni-net-dir\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.598443 kubelet[3485]: I0805 21:36:37.598139 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwmj\" (UniqueName: \"kubernetes.io/projected/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-kube-api-access-jmwmj\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.599724 kubelet[3485]: I0805 21:36:37.599619 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-tigera-ca-bundle\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.599844 kubelet[3485]: I0805 21:36:37.599787 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4afa8650-b6d4-4542-b051-1a7baa9ffcaa-flexvol-driver-host\") pod \"calico-node-rk4jp\" (UID: \"4afa8650-b6d4-4542-b051-1a7baa9ffcaa\") " pod="calico-system/calico-node-rk4jp" Aug 5 21:36:37.652250 containerd[2007]: time="2024-08-05T21:36:37.652063658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:36:37.657879 containerd[2007]: time="2024-08-05T21:36:37.657421142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:37.659474 containerd[2007]: time="2024-08-05T21:36:37.657647666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:36:37.659474 containerd[2007]: time="2024-08-05T21:36:37.657747122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:37.736430 kubelet[3485]: E0805 21:36:37.736101 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.736430 kubelet[3485]: W0805 21:36:37.736155 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.736430 kubelet[3485]: E0805 21:36:37.736237 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.758072 kubelet[3485]: E0805 21:36:37.757621 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.758072 kubelet[3485]: W0805 21:36:37.757667 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.758072 kubelet[3485]: E0805 21:36:37.757712 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:37.758818 systemd[1]: Started cri-containerd-fed65df9aacf2f1268514bf0000d7aca540b2a0f679d0fc15babdecfcae2fe37.scope - libcontainer container fed65df9aacf2f1268514bf0000d7aca540b2a0f679d0fc15babdecfcae2fe37. Aug 5 21:36:37.764595 kubelet[3485]: I0805 21:36:37.764513 3485 topology_manager.go:215] "Topology Admit Handler" podUID="8fc31815-4413-45c7-b4f1-d969a93d2abe" podNamespace="calico-system" podName="csi-node-driver-6rk7q" Aug 5 21:36:37.772206 kubelet[3485]: E0805 21:36:37.771470 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rk7q" podUID="8fc31815-4413-45c7-b4f1-d969a93d2abe" Aug 5 21:36:37.789561 kubelet[3485]: E0805 21:36:37.789232 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.789561 kubelet[3485]: W0805 21:36:37.789300 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.789815 kubelet[3485]: E0805 21:36:37.789661 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.809510 kubelet[3485]: E0805 21:36:37.809444 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.809510 kubelet[3485]: W0805 21:36:37.809488 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.809847 kubelet[3485]: E0805 21:36:37.809555 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.815297 kubelet[3485]: E0805 21:36:37.814311 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.815297 kubelet[3485]: W0805 21:36:37.814352 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.815297 kubelet[3485]: E0805 21:36:37.814413 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.815774 kubelet[3485]: E0805 21:36:37.815593 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.815774 kubelet[3485]: W0805 21:36:37.815629 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.815774 kubelet[3485]: E0805 21:36:37.815673 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:37.818186 kubelet[3485]: E0805 21:36:37.817840 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.818186 kubelet[3485]: W0805 21:36:37.817890 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.818186 kubelet[3485]: E0805 21:36:37.817934 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.820855 kubelet[3485]: E0805 21:36:37.820772 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.820855 kubelet[3485]: W0805 21:36:37.820815 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.820855 kubelet[3485]: E0805 21:36:37.820854 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.823811 kubelet[3485]: E0805 21:36:37.823567 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.823811 kubelet[3485]: W0805 21:36:37.823623 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.823811 kubelet[3485]: E0805 21:36:37.823674 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.825695 kubelet[3485]: E0805 21:36:37.825615 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.825695 kubelet[3485]: W0805 21:36:37.825676 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.828328 kubelet[3485]: E0805 21:36:37.827884 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.828624 kubelet[3485]: E0805 21:36:37.828241 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.828624 kubelet[3485]: W0805 21:36:37.828469 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.828624 kubelet[3485]: E0805 21:36:37.828515 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:37.831455 kubelet[3485]: E0805 21:36:37.830768 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.831455 kubelet[3485]: W0805 21:36:37.830832 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.831455 kubelet[3485]: E0805 21:36:37.830890 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.832363 kubelet[3485]: E0805 21:36:37.832274 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.832363 kubelet[3485]: W0805 21:36:37.832329 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.833672 kubelet[3485]: E0805 21:36:37.832395 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.834106 kubelet[3485]: E0805 21:36:37.834015 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.834106 kubelet[3485]: W0805 21:36:37.834089 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.834285 kubelet[3485]: E0805 21:36:37.834153 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.836454 kubelet[3485]: E0805 21:36:37.835306 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.836454 kubelet[3485]: W0805 21:36:37.835352 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.836454 kubelet[3485]: E0805 21:36:37.835454 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.836454 kubelet[3485]: E0805 21:36:37.836361 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.836454 kubelet[3485]: W0805 21:36:37.836458 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.837002 kubelet[3485]: E0805 21:36:37.836520 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:37.837060 kubelet[3485]: E0805 21:36:37.837022 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.837060 kubelet[3485]: W0805 21:36:37.837040 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.837161 kubelet[3485]: E0805 21:36:37.837068 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.837525 kubelet[3485]: E0805 21:36:37.837479 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.837525 kubelet[3485]: W0805 21:36:37.837514 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.837686 kubelet[3485]: E0805 21:36:37.837547 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.838794 kubelet[3485]: E0805 21:36:37.838054 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.838794 kubelet[3485]: W0805 21:36:37.838088 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.838794 kubelet[3485]: E0805 21:36:37.838119 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.840021 kubelet[3485]: E0805 21:36:37.839697 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.840021 kubelet[3485]: W0805 21:36:37.839745 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.840021 kubelet[3485]: E0805 21:36:37.839805 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.841077 kubelet[3485]: E0805 21:36:37.840755 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.841077 kubelet[3485]: W0805 21:36:37.840801 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.841077 kubelet[3485]: E0805 21:36:37.840843 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:37.842066 kubelet[3485]: E0805 21:36:37.841777 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.842066 kubelet[3485]: W0805 21:36:37.841813 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.842066 kubelet[3485]: E0805 21:36:37.841855 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.843293 kubelet[3485]: E0805 21:36:37.842719 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.843293 kubelet[3485]: W0805 21:36:37.842765 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.843293 kubelet[3485]: E0805 21:36:37.842807 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.844950 kubelet[3485]: E0805 21:36:37.844472 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.844950 kubelet[3485]: W0805 21:36:37.844523 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.844950 kubelet[3485]: E0805 21:36:37.844581 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.844950 kubelet[3485]: I0805 21:36:37.844664 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvxkr\" (UniqueName: \"kubernetes.io/projected/8fc31815-4413-45c7-b4f1-d969a93d2abe-kube-api-access-rvxkr\") pod \"csi-node-driver-6rk7q\" (UID: \"8fc31815-4413-45c7-b4f1-d969a93d2abe\") " pod="calico-system/csi-node-driver-6rk7q" Aug 5 21:36:37.845863 kubelet[3485]: E0805 21:36:37.845794 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.845863 kubelet[3485]: W0805 21:36:37.845826 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.846985 kubelet[3485]: E0805 21:36:37.846683 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:37.846985 kubelet[3485]: I0805 21:36:37.846847 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8fc31815-4413-45c7-b4f1-d969a93d2abe-varrun\") pod \"csi-node-driver-6rk7q\" (UID: \"8fc31815-4413-45c7-b4f1-d969a93d2abe\") " pod="calico-system/csi-node-driver-6rk7q" Aug 5 21:36:37.848416 kubelet[3485]: E0805 21:36:37.847901 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.848416 kubelet[3485]: W0805 21:36:37.847932 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.848416 kubelet[3485]: E0805 21:36:37.847978 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.849896 kubelet[3485]: E0805 21:36:37.849196 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.849896 kubelet[3485]: W0805 21:36:37.849258 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.849896 kubelet[3485]: E0805 21:36:37.849330 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.853531 kubelet[3485]: E0805 21:36:37.853477 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.854535 kubelet[3485]: W0805 21:36:37.853724 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.854535 kubelet[3485]: E0805 21:36:37.853919 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:37.854535 kubelet[3485]: I0805 21:36:37.854176 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8fc31815-4413-45c7-b4f1-d969a93d2abe-kubelet-dir\") pod \"csi-node-driver-6rk7q\" (UID: \"8fc31815-4413-45c7-b4f1-d969a93d2abe\") " pod="calico-system/csi-node-driver-6rk7q" Aug 5 21:36:37.857754 kubelet[3485]: E0805 21:36:37.855635 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.857754 kubelet[3485]: W0805 21:36:37.857483 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.859745 kubelet[3485]: E0805 21:36:37.859386 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.859745 kubelet[3485]: W0805 21:36:37.859428 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.862062 kubelet[3485]: E0805 21:36:37.861760 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.862062 kubelet[3485]: W0805 21:36:37.861803 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.862062 kubelet[3485]: E0805 21:36:37.861848 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.863558 kubelet[3485]: E0805 21:36:37.862660 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.863558 kubelet[3485]: I0805 21:36:37.862769 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8fc31815-4413-45c7-b4f1-d969a93d2abe-registration-dir\") pod \"csi-node-driver-6rk7q\" (UID: \"8fc31815-4413-45c7-b4f1-d969a93d2abe\") " pod="calico-system/csi-node-driver-6rk7q" Aug 5 21:36:37.863558 kubelet[3485]: E0805 21:36:37.862808 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.863558 kubelet[3485]: E0805 21:36:37.862958 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.863558 kubelet[3485]: W0805 21:36:37.862979 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.863558 kubelet[3485]: E0805 21:36:37.863021 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:37.867327 kubelet[3485]: E0805 21:36:37.867286 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.868173 kubelet[3485]: W0805 21:36:37.867591 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.868173 kubelet[3485]: E0805 21:36:37.867861 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.869893 kubelet[3485]: E0805 21:36:37.869632 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.869893 kubelet[3485]: W0805 21:36:37.869664 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.869893 kubelet[3485]: E0805 21:36:37.869702 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.873420 kubelet[3485]: E0805 21:36:37.871449 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.873420 kubelet[3485]: W0805 21:36:37.871488 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.873420 kubelet[3485]: E0805 21:36:37.871530 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.873420 kubelet[3485]: I0805 21:36:37.871623 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8fc31815-4413-45c7-b4f1-d969a93d2abe-socket-dir\") pod \"csi-node-driver-6rk7q\" (UID: \"8fc31815-4413-45c7-b4f1-d969a93d2abe\") " pod="calico-system/csi-node-driver-6rk7q" Aug 5 21:36:37.875460 kubelet[3485]: E0805 21:36:37.875417 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.875816 kubelet[3485]: W0805 21:36:37.875775 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.875971 kubelet[3485]: E0805 21:36:37.875948 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:37.877321 kubelet[3485]: E0805 21:36:37.876872 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.877321 kubelet[3485]: W0805 21:36:37.876923 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.877321 kubelet[3485]: E0805 21:36:37.876979 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.877710 containerd[2007]: time="2024-08-05T21:36:37.877597791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rk4jp,Uid:4afa8650-b6d4-4542-b051-1a7baa9ffcaa,Namespace:calico-system,Attempt:0,}" Aug 5 21:36:37.880154 kubelet[3485]: E0805 21:36:37.879978 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.880154 kubelet[3485]: W0805 21:36:37.880019 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.880154 kubelet[3485]: E0805 21:36:37.880073 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.978685 kubelet[3485]: E0805 21:36:37.977708 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.978685 kubelet[3485]: W0805 21:36:37.977748 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.978685 kubelet[3485]: E0805 21:36:37.977790 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.979651 containerd[2007]: time="2024-08-05T21:36:37.978244540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:36:37.986415 kubelet[3485]: E0805 21:36:37.983760 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.986415 kubelet[3485]: W0805 21:36:37.983818 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.986415 kubelet[3485]: E0805 21:36:37.983888 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.986765 containerd[2007]: time="2024-08-05T21:36:37.978358108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:37.987634 kubelet[3485]: E0805 21:36:37.987316 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.987634 kubelet[3485]: W0805 21:36:37.987352 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.987634 kubelet[3485]: E0805 21:36:37.987419 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.992706 kubelet[3485]: E0805 21:36:37.992640 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.993311 kubelet[3485]: W0805 21:36:37.992955 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.993311 kubelet[3485]: E0805 21:36:37.993254 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:37.994215 kubelet[3485]: E0805 21:36:37.993619 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:37.994215 kubelet[3485]: W0805 21:36:37.993641 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:37.994657 containerd[2007]: time="2024-08-05T21:36:37.988832728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:36:37.994657 containerd[2007]: time="2024-08-05T21:36:37.988880284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:37.998469 kubelet[3485]: E0805 21:36:37.995506 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.000412 kubelet[3485]: E0805 21:36:37.999647 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.001393 kubelet[3485]: W0805 21:36:38.000744 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.001393 kubelet[3485]: E0805 21:36:38.000867 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:38.006128 kubelet[3485]: E0805 21:36:38.005735 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.006128 kubelet[3485]: W0805 21:36:38.005772 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.006128 kubelet[3485]: E0805 21:36:38.005843 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.010193 kubelet[3485]: E0805 21:36:38.010109 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.010650 kubelet[3485]: W0805 21:36:38.010606 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.010864 kubelet[3485]: E0805 21:36:38.010823 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.011468 kubelet[3485]: E0805 21:36:38.011438 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.011624 kubelet[3485]: W0805 21:36:38.011599 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.012607 kubelet[3485]: E0805 21:36:38.012085 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.015911 kubelet[3485]: E0805 21:36:38.013813 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.015911 kubelet[3485]: W0805 21:36:38.013846 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.015911 kubelet[3485]: E0805 21:36:38.013890 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.016824 kubelet[3485]: E0805 21:36:38.016547 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.016824 kubelet[3485]: W0805 21:36:38.016581 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.016824 kubelet[3485]: E0805 21:36:38.016620 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:38.018534 kubelet[3485]: E0805 21:36:38.017982 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.018534 kubelet[3485]: W0805 21:36:38.018057 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.018913 kubelet[3485]: E0805 21:36:38.018876 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.019339 kubelet[3485]: E0805 21:36:38.019317 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.019602 kubelet[3485]: W0805 21:36:38.019485 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.019717 kubelet[3485]: E0805 21:36:38.019697 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.021493 kubelet[3485]: E0805 21:36:38.020171 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.021493 kubelet[3485]: W0805 21:36:38.020203 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.021913 kubelet[3485]: E0805 21:36:38.021743 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.022834 kubelet[3485]: E0805 21:36:38.022568 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.022834 kubelet[3485]: W0805 21:36:38.022599 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.023493 kubelet[3485]: E0805 21:36:38.023461 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.026453 kubelet[3485]: E0805 21:36:38.023786 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.026453 kubelet[3485]: W0805 21:36:38.023808 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.026798 kubelet[3485]: E0805 21:36:38.026673 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:38.027732 kubelet[3485]: E0805 21:36:38.027485 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.027732 kubelet[3485]: W0805 21:36:38.027526 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.028056 kubelet[3485]: E0805 21:36:38.027993 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.028337 kubelet[3485]: E0805 21:36:38.028259 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.028337 kubelet[3485]: W0805 21:36:38.028302 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.028758 kubelet[3485]: E0805 21:36:38.028589 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.029175 kubelet[3485]: E0805 21:36:38.029145 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.029409 kubelet[3485]: W0805 21:36:38.029274 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.029568 kubelet[3485]: E0805 21:36:38.029542 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.030309 kubelet[3485]: E0805 21:36:38.030131 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.030309 kubelet[3485]: W0805 21:36:38.030157 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.030799 kubelet[3485]: E0805 21:36:38.030562 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.031118 kubelet[3485]: E0805 21:36:38.031081 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.032157 kubelet[3485]: W0805 21:36:38.031932 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.032341 kubelet[3485]: E0805 21:36:38.032316 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:36:38.032865 kubelet[3485]: E0805 21:36:38.032806 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.032865 kubelet[3485]: W0805 21:36:38.032833 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.035025 kubelet[3485]: E0805 21:36:38.034834 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.035813 kubelet[3485]: E0805 21:36:38.035642 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.035813 kubelet[3485]: W0805 21:36:38.035685 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.035813 kubelet[3485]: E0805 21:36:38.035763 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.038832 kubelet[3485]: E0805 21:36:38.038600 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.038832 kubelet[3485]: W0805 21:36:38.038635 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.038832 kubelet[3485]: E0805 21:36:38.038679 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.044964 kubelet[3485]: E0805 21:36:38.039478 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.044964 kubelet[3485]: W0805 21:36:38.039516 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.044964 kubelet[3485]: E0805 21:36:38.039558 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.079485 systemd[1]: Started cri-containerd-6344f162600e6a4aaff614367cb06a3ae5fdb1af9244a511e6f09c8c66b0d06c.scope - libcontainer container 6344f162600e6a4aaff614367cb06a3ae5fdb1af9244a511e6f09c8c66b0d06c. 
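The repeated kubelet errors above (which continue for a few more entries below) come from the FlexVolume dynamic plugin prober repeatedly probing the nodeagent~uds driver directory: the uds executable is not installed yet, so the "init" call produces no output at all and decoding that empty output as JSON fails. A minimal Go sketch of just the decoding step, as an illustration rather than kubelet code:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // A FlexVolume driver is expected to print a JSON status document on
        // stdout. With the uds executable missing from
        // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/,
        // the captured output is empty, and decoding it fails with the same
        // error string seen in every kubelet entry above.
        var status map[string]interface{}
        err := json.Unmarshal([]byte(""), &status)
        fmt.Println(err) // unexpected end of JSON input
    }

The "executable file not found in $PATH" warning and the "unexpected end of JSON input" error are therefore two views of the same condition: the driver binary is absent, so there is nothing to run and nothing to parse.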
Aug 5 21:36:38.111691 kubelet[3485]: E0805 21:36:38.111643 3485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:36:38.112000 kubelet[3485]: W0805 21:36:38.111955 3485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:36:38.112448 kubelet[3485]: E0805 21:36:38.112214 3485 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:36:38.404798 containerd[2007]: time="2024-08-05T21:36:38.404655026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rk4jp,Uid:4afa8650-b6d4-4542-b051-1a7baa9ffcaa,Namespace:calico-system,Attempt:0,} returns sandbox id \"6344f162600e6a4aaff614367cb06a3ae5fdb1af9244a511e6f09c8c66b0d06c\"" Aug 5 21:36:38.419831 containerd[2007]: time="2024-08-05T21:36:38.419782238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 21:36:38.520195 containerd[2007]: time="2024-08-05T21:36:38.519484887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76445c8dcc-d772d,Uid:f3b56607-db40-4011-b9a5-be30dd3ae142,Namespace:calico-system,Attempt:0,} returns sandbox id \"fed65df9aacf2f1268514bf0000d7aca540b2a0f679d0fc15babdecfcae2fe37\"" Aug 5 21:36:40.041299 containerd[2007]: time="2024-08-05T21:36:40.040533158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:40.044285 containerd[2007]: time="2024-08-05T21:36:40.044177594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Aug 5 21:36:40.045713 containerd[2007]: time="2024-08-05T21:36:40.045587954Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:40.052318 containerd[2007]: time="2024-08-05T21:36:40.052230374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:40.054012 containerd[2007]: time="2024-08-05T21:36:40.053855042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.62957938s" Aug 5 21:36:40.054012 containerd[2007]: time="2024-08-05T21:36:40.053945582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Aug 5 21:36:40.056472 containerd[2007]: time="2024-08-05T21:36:40.055743122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 21:36:40.061881 kubelet[3485]: E0805 21:36:40.061279 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rk7q" podUID="8fc31815-4413-45c7-b4f1-d969a93d2abe" Aug 5 21:36:40.066762 containerd[2007]: time="2024-08-05T21:36:40.066621734Z" level=info msg="CreateContainer within sandbox \"6344f162600e6a4aaff614367cb06a3ae5fdb1af9244a511e6f09c8c66b0d06c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 21:36:40.116416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4105589435.mount: Deactivated successfully. Aug 5 21:36:40.129563 containerd[2007]: time="2024-08-05T21:36:40.129444555Z" level=info msg="CreateContainer within sandbox \"6344f162600e6a4aaff614367cb06a3ae5fdb1af9244a511e6f09c8c66b0d06c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8e2a065f5c719d54009543ca6eb8556b86eb6244e582c845b597c55153da104f\"" Aug 5 21:36:40.131861 containerd[2007]: time="2024-08-05T21:36:40.131776251Z" level=info msg="StartContainer for \"8e2a065f5c719d54009543ca6eb8556b86eb6244e582c845b597c55153da104f\"" Aug 5 21:36:40.250780 systemd[1]: Started cri-containerd-8e2a065f5c719d54009543ca6eb8556b86eb6244e582c845b597c55153da104f.scope - libcontainer container 8e2a065f5c719d54009543ca6eb8556b86eb6244e582c845b597c55153da104f. Aug 5 21:36:40.430578 containerd[2007]: time="2024-08-05T21:36:40.430510768Z" level=info msg="StartContainer for \"8e2a065f5c719d54009543ca6eb8556b86eb6244e582c845b597c55153da104f\" returns successfully" Aug 5 21:36:40.497121 systemd[1]: cri-containerd-8e2a065f5c719d54009543ca6eb8556b86eb6244e582c845b597c55153da104f.scope: Deactivated successfully. Aug 5 21:36:40.586814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e2a065f5c719d54009543ca6eb8556b86eb6244e582c845b597c55153da104f-rootfs.mount: Deactivated successfully. 
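The flexvol-driver container created and started above (8e2a065f...) runs to completion almost immediately, which is why systemd reports its scope deactivated and its rootfs unmounted right after the successful StartContainer, and why the shim-disconnected entries that follow appear. Judging from the pod2daemon-flexvol image it was pulled from, its likely job is to copy the uds FlexVolume driver into the kubelet plugin directory that the probe errors above kept complaining about; that is an inference from the image name, not something this log states. A small hypothetical check for the result:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the kubelet probe errors earlier in this log; whether
        // this file exists is what decides if the FlexVolume plugin loads.
        const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

        info, err := os.Stat(driver)
        if err != nil {
            fmt.Println("uds driver not installed yet:", err)
            return
        }
        fmt.Printf("uds driver present, mode %v, size %d bytes\n", info.Mode(), info.Size())
    }

Once that file exists, the repeated "executable file not found in $PATH" probes should stop, which is consistent with those errors not recurring later in this log.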
Aug 5 21:36:40.850172 containerd[2007]: time="2024-08-05T21:36:40.849579462Z" level=info msg="shim disconnected" id=8e2a065f5c719d54009543ca6eb8556b86eb6244e582c845b597c55153da104f namespace=k8s.io Aug 5 21:36:40.850172 containerd[2007]: time="2024-08-05T21:36:40.849661170Z" level=warning msg="cleaning up after shim disconnected" id=8e2a065f5c719d54009543ca6eb8556b86eb6244e582c845b597c55153da104f namespace=k8s.io Aug 5 21:36:40.850172 containerd[2007]: time="2024-08-05T21:36:40.849683718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:36:42.063010 kubelet[3485]: E0805 21:36:42.062770 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rk7q" podUID="8fc31815-4413-45c7-b4f1-d969a93d2abe" Aug 5 21:36:43.090098 containerd[2007]: time="2024-08-05T21:36:43.089891777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:43.093989 containerd[2007]: time="2024-08-05T21:36:43.091981925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Aug 5 21:36:43.097432 containerd[2007]: time="2024-08-05T21:36:43.096919877Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:43.113212 containerd[2007]: time="2024-08-05T21:36:43.112782473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:43.124070 containerd[2007]: time="2024-08-05T21:36:43.123963353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 3.068120607s" Aug 5 21:36:43.124070 containerd[2007]: time="2024-08-05T21:36:43.124062737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Aug 5 21:36:43.125927 containerd[2007]: time="2024-08-05T21:36:43.124936601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 21:36:43.182973 containerd[2007]: time="2024-08-05T21:36:43.181667214Z" level=info msg="CreateContainer within sandbox \"fed65df9aacf2f1268514bf0000d7aca540b2a0f679d0fc15babdecfcae2fe37\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 21:36:43.225632 containerd[2007]: time="2024-08-05T21:36:43.225541866Z" level=info msg="CreateContainer within sandbox \"fed65df9aacf2f1268514bf0000d7aca540b2a0f679d0fc15babdecfcae2fe37\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"05941eb4814f2c675fd33ca8ef078a4dc6ae3ecd8880a0b5972e69a4b0d928df\"" Aug 5 21:36:43.228887 containerd[2007]: time="2024-08-05T21:36:43.228552126Z" level=info msg="StartContainer for \"05941eb4814f2c675fd33ca8ef078a4dc6ae3ecd8880a0b5972e69a4b0d928df\"" Aug 5 21:36:43.314624 systemd[1]: Started 
cri-containerd-05941eb4814f2c675fd33ca8ef078a4dc6ae3ecd8880a0b5972e69a4b0d928df.scope - libcontainer container 05941eb4814f2c675fd33ca8ef078a4dc6ae3ecd8880a0b5972e69a4b0d928df. Aug 5 21:36:43.470835 containerd[2007]: time="2024-08-05T21:36:43.470319475Z" level=info msg="StartContainer for \"05941eb4814f2c675fd33ca8ef078a4dc6ae3ecd8880a0b5972e69a4b0d928df\" returns successfully" Aug 5 21:36:44.063926 kubelet[3485]: E0805 21:36:44.063099 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rk7q" podUID="8fc31815-4413-45c7-b4f1-d969a93d2abe" Aug 5 21:36:44.378130 kubelet[3485]: I0805 21:36:44.376859 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-76445c8dcc-d772d" podStartSLOduration=2.77950095 podCreationTimestamp="2024-08-05 21:36:37 +0000 UTC" firstStartedPulling="2024-08-05 21:36:38.527506767 +0000 UTC m=+21.870150194" lastFinishedPulling="2024-08-05 21:36:43.124717073 +0000 UTC m=+26.467360512" observedRunningTime="2024-08-05 21:36:44.37124462 +0000 UTC m=+27.713888047" watchObservedRunningTime="2024-08-05 21:36:44.376711268 +0000 UTC m=+27.719354875" Aug 5 21:36:45.358150 kubelet[3485]: I0805 21:36:45.356467 3485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:36:46.062992 kubelet[3485]: E0805 21:36:46.062912 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rk7q" podUID="8fc31815-4413-45c7-b4f1-d969a93d2abe" Aug 5 21:36:48.061640 kubelet[3485]: E0805 21:36:48.061594 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rk7q" podUID="8fc31815-4413-45c7-b4f1-d969a93d2abe" Aug 5 21:36:48.482315 containerd[2007]: time="2024-08-05T21:36:48.481830084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:48.483959 containerd[2007]: time="2024-08-05T21:36:48.483878796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Aug 5 21:36:48.485588 containerd[2007]: time="2024-08-05T21:36:48.485508816Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:48.492904 containerd[2007]: time="2024-08-05T21:36:48.492300600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:48.494074 containerd[2007]: time="2024-08-05T21:36:48.493984164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 5.368963695s" Aug 5 21:36:48.494252 containerd[2007]: time="2024-08-05T21:36:48.494078664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Aug 5 21:36:48.500976 containerd[2007]: time="2024-08-05T21:36:48.500755308Z" level=info msg="CreateContainer within sandbox \"6344f162600e6a4aaff614367cb06a3ae5fdb1af9244a511e6f09c8c66b0d06c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 21:36:48.533362 containerd[2007]: time="2024-08-05T21:36:48.533121624Z" level=info msg="CreateContainer within sandbox \"6344f162600e6a4aaff614367cb06a3ae5fdb1af9244a511e6f09c8c66b0d06c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3c62d6e6994af6cf416b3ac634e8dce7d99298f041ab2b12d9aa16df19e8fa4c\"" Aug 5 21:36:48.534474 containerd[2007]: time="2024-08-05T21:36:48.534355680Z" level=info msg="StartContainer for \"3c62d6e6994af6cf416b3ac634e8dce7d99298f041ab2b12d9aa16df19e8fa4c\"" Aug 5 21:36:48.618886 systemd[1]: Started cri-containerd-3c62d6e6994af6cf416b3ac634e8dce7d99298f041ab2b12d9aa16df19e8fa4c.scope - libcontainer container 3c62d6e6994af6cf416b3ac634e8dce7d99298f041ab2b12d9aa16df19e8fa4c. Aug 5 21:36:48.702786 containerd[2007]: time="2024-08-05T21:36:48.702613993Z" level=info msg="StartContainer for \"3c62d6e6994af6cf416b3ac634e8dce7d99298f041ab2b12d9aa16df19e8fa4c\" returns successfully" Aug 5 21:36:49.728422 containerd[2007]: time="2024-08-05T21:36:49.727533218Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:36:49.734437 systemd[1]: cri-containerd-3c62d6e6994af6cf416b3ac634e8dce7d99298f041ab2b12d9aa16df19e8fa4c.scope: Deactivated successfully. Aug 5 21:36:49.785578 kubelet[3485]: I0805 21:36:49.783205 3485 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 5 21:36:49.827556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c62d6e6994af6cf416b3ac634e8dce7d99298f041ab2b12d9aa16df19e8fa4c-rootfs.mount: Deactivated successfully. Aug 5 21:36:49.857390 kubelet[3485]: I0805 21:36:49.857040 3485 topology_manager.go:215] "Topology Admit Handler" podUID="39e5d80d-eef5-4d09-a308-7bc48d19044c" podNamespace="kube-system" podName="coredns-5dd5756b68-mffms" Aug 5 21:36:49.863190 kubelet[3485]: I0805 21:36:49.863008 3485 topology_manager.go:215] "Topology Admit Handler" podUID="a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48" podNamespace="kube-system" podName="coredns-5dd5756b68-pkkts" Aug 5 21:36:49.896410 kubelet[3485]: I0805 21:36:49.893753 3485 topology_manager.go:215] "Topology Admit Handler" podUID="85d380da-4d3c-44a8-b1e4-555530171664" podNamespace="calico-system" podName="calico-kube-controllers-6d79b48bbd-jq6mz" Aug 5 21:36:49.896534 systemd[1]: Created slice kubepods-burstable-pod39e5d80d_eef5_4d09_a308_7bc48d19044c.slice - libcontainer container kubepods-burstable-pod39e5d80d_eef5_4d09_a308_7bc48d19044c.slice. Aug 5 21:36:49.931089 systemd[1]: Created slice kubepods-burstable-poda55bb9e9_fa9b_4cb6_8143_9c7be22a3c48.slice - libcontainer container kubepods-burstable-poda55bb9e9_fa9b_4cb6_8143_9c7be22a3c48.slice. 
Aug 5 21:36:49.939687 kubelet[3485]: I0805 21:36:49.938525 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39e5d80d-eef5-4d09-a308-7bc48d19044c-config-volume\") pod \"coredns-5dd5756b68-mffms\" (UID: \"39e5d80d-eef5-4d09-a308-7bc48d19044c\") " pod="kube-system/coredns-5dd5756b68-mffms" Aug 5 21:36:49.939687 kubelet[3485]: I0805 21:36:49.938609 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48-config-volume\") pod \"coredns-5dd5756b68-pkkts\" (UID: \"a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48\") " pod="kube-system/coredns-5dd5756b68-pkkts" Aug 5 21:36:49.939687 kubelet[3485]: I0805 21:36:49.938662 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggjxv\" (UniqueName: \"kubernetes.io/projected/39e5d80d-eef5-4d09-a308-7bc48d19044c-kube-api-access-ggjxv\") pod \"coredns-5dd5756b68-mffms\" (UID: \"39e5d80d-eef5-4d09-a308-7bc48d19044c\") " pod="kube-system/coredns-5dd5756b68-mffms" Aug 5 21:36:49.939687 kubelet[3485]: I0805 21:36:49.938744 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrgvz\" (UniqueName: \"kubernetes.io/projected/a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48-kube-api-access-wrgvz\") pod \"coredns-5dd5756b68-pkkts\" (UID: \"a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48\") " pod="kube-system/coredns-5dd5756b68-pkkts" Aug 5 21:36:49.957048 systemd[1]: Created slice kubepods-besteffort-pod85d380da_4d3c_44a8_b1e4_555530171664.slice - libcontainer container kubepods-besteffort-pod85d380da_4d3c_44a8_b1e4_555530171664.slice. Aug 5 21:36:50.045672 kubelet[3485]: I0805 21:36:50.039883 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85d380da-4d3c-44a8-b1e4-555530171664-tigera-ca-bundle\") pod \"calico-kube-controllers-6d79b48bbd-jq6mz\" (UID: \"85d380da-4d3c-44a8-b1e4-555530171664\") " pod="calico-system/calico-kube-controllers-6d79b48bbd-jq6mz" Aug 5 21:36:50.045672 kubelet[3485]: I0805 21:36:50.042470 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4jlc\" (UniqueName: \"kubernetes.io/projected/85d380da-4d3c-44a8-b1e4-555530171664-kube-api-access-g4jlc\") pod \"calico-kube-controllers-6d79b48bbd-jq6mz\" (UID: \"85d380da-4d3c-44a8-b1e4-555530171664\") " pod="calico-system/calico-kube-controllers-6d79b48bbd-jq6mz" Aug 5 21:36:50.115790 systemd[1]: Created slice kubepods-besteffort-pod8fc31815_4413_45c7_b4f1_d969a93d2abe.slice - libcontainer container kubepods-besteffort-pod8fc31815_4413_45c7_b4f1_d969a93d2abe.slice. 
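The kubepods slice names in the systemd entries above follow a fixed pattern from the kubelet's systemd cgroup driver: each pod lands under its QoS class (burstable for the coredns pods, besteffort for calico-kube-controllers and csi-node-driver) with the pod UID's dashes replaced by underscores. A small sketch of that mapping, using UIDs from this log; the guaranteed-QoS form is an assumption, since no guaranteed pod appears here:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the naming pattern visible in the "Created slice"
    // entries above.
    func sliceName(qosClass, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_")
        if qosClass == "" { // guaranteed pods sit directly under kubepods (assumed)
            return fmt.Sprintf("kubepods-pod%s.slice", uid)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
    }

    func main() {
        fmt.Println(sliceName("besteffort", "8fc31815-4413-45c7-b4f1-d969a93d2abe"))
        // kubepods-besteffort-pod8fc31815_4413_45c7_b4f1_d969a93d2abe.slice
        fmt.Println(sliceName("burstable", "39e5d80d-eef5-4d09-a308-7bc48d19044c"))
        // kubepods-burstable-pod39e5d80d_eef5_4d09_a308_7bc48d19044c.slice
    }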
Aug 5 21:36:50.124963 containerd[2007]: time="2024-08-05T21:36:50.124868220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rk7q,Uid:8fc31815-4413-45c7-b4f1-d969a93d2abe,Namespace:calico-system,Attempt:0,}" Aug 5 21:36:50.210630 containerd[2007]: time="2024-08-05T21:36:50.210464749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mffms,Uid:39e5d80d-eef5-4d09-a308-7bc48d19044c,Namespace:kube-system,Attempt:0,}" Aug 5 21:36:50.247553 containerd[2007]: time="2024-08-05T21:36:50.247484365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pkkts,Uid:a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48,Namespace:kube-system,Attempt:0,}" Aug 5 21:36:50.265926 containerd[2007]: time="2024-08-05T21:36:50.265807009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d79b48bbd-jq6mz,Uid:85d380da-4d3c-44a8-b1e4-555530171664,Namespace:calico-system,Attempt:0,}" Aug 5 21:36:50.573214 containerd[2007]: time="2024-08-05T21:36:50.573042434Z" level=error msg="Failed to destroy network for sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:50.574337 containerd[2007]: time="2024-08-05T21:36:50.574109906Z" level=error msg="encountered an error cleaning up failed sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:50.574337 containerd[2007]: time="2024-08-05T21:36:50.574258490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rk7q,Uid:8fc31815-4413-45c7-b4f1-d969a93d2abe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:50.575438 kubelet[3485]: E0805 21:36:50.575358 3485 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:50.575682 kubelet[3485]: E0805 21:36:50.575490 3485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6rk7q" Aug 5 21:36:50.575682 kubelet[3485]: E0805 21:36:50.575529 3485 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6rk7q" Aug 5 21:36:50.575682 kubelet[3485]: E0805 21:36:50.575624 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6rk7q_calico-system(8fc31815-4413-45c7-b4f1-d969a93d2abe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6rk7q_calico-system(8fc31815-4413-45c7-b4f1-d969a93d2abe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6rk7q" podUID="8fc31815-4413-45c7-b4f1-d969a93d2abe" Aug 5 21:36:50.821542 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e-shm.mount: Deactivated successfully. Aug 5 21:36:50.904877 containerd[2007]: time="2024-08-05T21:36:50.904613860Z" level=info msg="shim disconnected" id=3c62d6e6994af6cf416b3ac634e8dce7d99298f041ab2b12d9aa16df19e8fa4c namespace=k8s.io Aug 5 21:36:50.909499 containerd[2007]: time="2024-08-05T21:36:50.907564996Z" level=warning msg="cleaning up after shim disconnected" id=3c62d6e6994af6cf416b3ac634e8dce7d99298f041ab2b12d9aa16df19e8fa4c namespace=k8s.io Aug 5 21:36:50.909499 containerd[2007]: time="2024-08-05T21:36:50.908534272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:36:51.231571 containerd[2007]: time="2024-08-05T21:36:51.230537834Z" level=error msg="Failed to destroy network for sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.232182 containerd[2007]: time="2024-08-05T21:36:51.231834278Z" level=error msg="encountered an error cleaning up failed sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.232706 containerd[2007]: time="2024-08-05T21:36:51.232589030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mffms,Uid:39e5d80d-eef5-4d09-a308-7bc48d19044c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.233172 kubelet[3485]: E0805 21:36:51.233137 3485 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Aug 5 21:36:51.235421 kubelet[3485]: E0805 21:36:51.233683 3485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-mffms" Aug 5 21:36:51.235421 kubelet[3485]: E0805 21:36:51.233763 3485 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-mffms" Aug 5 21:36:51.235421 kubelet[3485]: E0805 21:36:51.234701 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-mffms_kube-system(39e5d80d-eef5-4d09-a308-7bc48d19044c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-mffms_kube-system(39e5d80d-eef5-4d09-a308-7bc48d19044c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-mffms" podUID="39e5d80d-eef5-4d09-a308-7bc48d19044c" Aug 5 21:36:51.248415 containerd[2007]: time="2024-08-05T21:36:51.247010510Z" level=error msg="Failed to destroy network for sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.248586 containerd[2007]: time="2024-08-05T21:36:51.248378090Z" level=error msg="encountered an error cleaning up failed sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.248586 containerd[2007]: time="2024-08-05T21:36:51.248561942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pkkts,Uid:a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.249166 kubelet[3485]: E0805 21:36:51.249089 3485 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.249469 kubelet[3485]: E0805 21:36:51.249227 3485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-pkkts" Aug 5 21:36:51.249469 kubelet[3485]: E0805 21:36:51.249280 3485 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-pkkts" Aug 5 21:36:51.252251 kubelet[3485]: E0805 21:36:51.250205 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-pkkts_kube-system(a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-pkkts_kube-system(a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-pkkts" podUID="a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48" Aug 5 21:36:51.254935 containerd[2007]: time="2024-08-05T21:36:51.254865278Z" level=error msg="Failed to destroy network for sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.256259 containerd[2007]: time="2024-08-05T21:36:51.256168658Z" level=error msg="encountered an error cleaning up failed sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.256718 containerd[2007]: time="2024-08-05T21:36:51.256324658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d79b48bbd-jq6mz,Uid:85d380da-4d3c-44a8-b1e4-555530171664,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.258033 kubelet[3485]: E0805 21:36:51.257938 3485 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.258356 kubelet[3485]: E0805 21:36:51.258113 3485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d79b48bbd-jq6mz" Aug 5 21:36:51.259618 kubelet[3485]: E0805 21:36:51.258915 3485 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d79b48bbd-jq6mz" Aug 5 21:36:51.260512 kubelet[3485]: E0805 21:36:51.260129 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d79b48bbd-jq6mz_calico-system(85d380da-4d3c-44a8-b1e4-555530171664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d79b48bbd-jq6mz_calico-system(85d380da-4d3c-44a8-b1e4-555530171664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d79b48bbd-jq6mz" podUID="85d380da-4d3c-44a8-b1e4-555530171664" Aug 5 21:36:51.392076 kubelet[3485]: I0805 21:36:51.392020 3485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:36:51.394201 containerd[2007]: time="2024-08-05T21:36:51.394112763Z" level=info msg="StopPodSandbox for \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\"" Aug 5 21:36:51.396436 containerd[2007]: time="2024-08-05T21:36:51.396252555Z" level=info msg="Ensure that sandbox 48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88 in task-service has been cleanup successfully" Aug 5 21:36:51.400923 kubelet[3485]: I0805 21:36:51.400862 3485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:36:51.405392 containerd[2007]: time="2024-08-05T21:36:51.405054759Z" level=info msg="StopPodSandbox for \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\"" Aug 5 21:36:51.408505 containerd[2007]: time="2024-08-05T21:36:51.408396051Z" level=info msg="Ensure that sandbox a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf in task-service has been cleanup successfully" Aug 5 21:36:51.435383 containerd[2007]: time="2024-08-05T21:36:51.435142491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 21:36:51.437286 kubelet[3485]: I0805 21:36:51.437248 3485 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:36:51.447800 containerd[2007]: time="2024-08-05T21:36:51.447609723Z" level=info msg="StopPodSandbox for \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\"" Aug 5 21:36:51.448956 containerd[2007]: time="2024-08-05T21:36:51.448826679Z" level=info msg="Ensure that sandbox 621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0 in task-service has been cleanup successfully" Aug 5 21:36:51.458903 kubelet[3485]: I0805 21:36:51.457090 3485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:36:51.462447 containerd[2007]: time="2024-08-05T21:36:51.460964355Z" level=info msg="StopPodSandbox for \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\"" Aug 5 21:36:51.469712 containerd[2007]: time="2024-08-05T21:36:51.468532023Z" level=info msg="Ensure that sandbox b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e in task-service has been cleanup successfully" Aug 5 21:36:51.587068 containerd[2007]: time="2024-08-05T21:36:51.586202043Z" level=error msg="StopPodSandbox for \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\" failed" error="failed to destroy network for sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.587585 kubelet[3485]: E0805 21:36:51.586657 3485 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:36:51.587585 kubelet[3485]: E0805 21:36:51.586762 3485 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88"} Aug 5 21:36:51.588710 kubelet[3485]: E0805 21:36:51.587468 3485 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39e5d80d-eef5-4d09-a308-7bc48d19044c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:36:51.588710 kubelet[3485]: E0805 21:36:51.588612 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39e5d80d-eef5-4d09-a308-7bc48d19044c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-mffms" 
podUID="39e5d80d-eef5-4d09-a308-7bc48d19044c" Aug 5 21:36:51.618182 containerd[2007]: time="2024-08-05T21:36:51.617681200Z" level=error msg="StopPodSandbox for \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\" failed" error="failed to destroy network for sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.619386 kubelet[3485]: E0805 21:36:51.618956 3485 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:36:51.619386 kubelet[3485]: E0805 21:36:51.619093 3485 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf"} Aug 5 21:36:51.619386 kubelet[3485]: E0805 21:36:51.619199 3485 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:36:51.620514 kubelet[3485]: E0805 21:36:51.619274 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-pkkts" podUID="a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48" Aug 5 21:36:51.627317 containerd[2007]: time="2024-08-05T21:36:51.627214672Z" level=error msg="StopPodSandbox for \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\" failed" error="failed to destroy network for sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.628155 kubelet[3485]: E0805 21:36:51.627799 3485 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:36:51.628155 
kubelet[3485]: E0805 21:36:51.627862 3485 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0"} Aug 5 21:36:51.628155 kubelet[3485]: E0805 21:36:51.627927 3485 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85d380da-4d3c-44a8-b1e4-555530171664\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:36:51.628155 kubelet[3485]: E0805 21:36:51.628024 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85d380da-4d3c-44a8-b1e4-555530171664\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d79b48bbd-jq6mz" podUID="85d380da-4d3c-44a8-b1e4-555530171664" Aug 5 21:36:51.635781 containerd[2007]: time="2024-08-05T21:36:51.635601232Z" level=error msg="StopPodSandbox for \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\" failed" error="failed to destroy network for sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:36:51.636967 kubelet[3485]: E0805 21:36:51.636615 3485 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:36:51.636967 kubelet[3485]: E0805 21:36:51.636738 3485 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e"} Aug 5 21:36:51.636967 kubelet[3485]: E0805 21:36:51.636847 3485 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fc31815-4413-45c7-b4f1-d969a93d2abe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:36:51.636967 kubelet[3485]: E0805 21:36:51.636913 3485 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fc31815-4413-45c7-b4f1-d969a93d2abe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6rk7q" podUID="8fc31815-4413-45c7-b4f1-d969a93d2abe" Aug 5 21:36:51.819775 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf-shm.mount: Deactivated successfully. Aug 5 21:36:51.820000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0-shm.mount: Deactivated successfully. Aug 5 21:36:51.820153 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88-shm.mount: Deactivated successfully. Aug 5 21:36:58.925145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419543207.mount: Deactivated successfully. Aug 5 21:36:58.995158 containerd[2007]: time="2024-08-05T21:36:58.994982040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:58.997459 containerd[2007]: time="2024-08-05T21:36:58.997112676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Aug 5 21:36:59.000322 containerd[2007]: time="2024-08-05T21:36:59.000017900Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:59.009053 containerd[2007]: time="2024-08-05T21:36:59.008834816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:59.012254 containerd[2007]: time="2024-08-05T21:36:59.010754264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 7.574942737s" Aug 5 21:36:59.012254 containerd[2007]: time="2024-08-05T21:36:59.010844408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Aug 5 21:36:59.043123 containerd[2007]: time="2024-08-05T21:36:59.042978057Z" level=info msg="CreateContainer within sandbox \"6344f162600e6a4aaff614367cb06a3ae5fdb1af9244a511e6f09c8c66b0d06c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 21:36:59.090284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820958709.mount: Deactivated successfully. 
Aug 5 21:36:59.090729 containerd[2007]: time="2024-08-05T21:36:59.090501549Z" level=info msg="CreateContainer within sandbox \"6344f162600e6a4aaff614367cb06a3ae5fdb1af9244a511e6f09c8c66b0d06c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"21338728b6edd0213be0863ed88eb531c7bcca089e7ac8e35a0882ee77bce974\"" Aug 5 21:36:59.096761 containerd[2007]: time="2024-08-05T21:36:59.092630073Z" level=info msg="StartContainer for \"21338728b6edd0213be0863ed88eb531c7bcca089e7ac8e35a0882ee77bce974\"" Aug 5 21:36:59.169924 systemd[1]: Started cri-containerd-21338728b6edd0213be0863ed88eb531c7bcca089e7ac8e35a0882ee77bce974.scope - libcontainer container 21338728b6edd0213be0863ed88eb531c7bcca089e7ac8e35a0882ee77bce974. Aug 5 21:36:59.261716 containerd[2007]: time="2024-08-05T21:36:59.260676058Z" level=info msg="StartContainer for \"21338728b6edd0213be0863ed88eb531c7bcca089e7ac8e35a0882ee77bce974\" returns successfully" Aug 5 21:36:59.418388 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 21:36:59.419111 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 5 21:36:59.534133 kubelet[3485]: I0805 21:36:59.533807 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-rk4jp" podStartSLOduration=1.9383653490000001 podCreationTimestamp="2024-08-05 21:36:37 +0000 UTC" firstStartedPulling="2024-08-05 21:36:38.415986554 +0000 UTC m=+21.758630017" lastFinishedPulling="2024-08-05 21:36:59.011332784 +0000 UTC m=+42.353976223" observedRunningTime="2024-08-05 21:36:59.533194115 +0000 UTC m=+42.875837590" watchObservedRunningTime="2024-08-05 21:36:59.533711555 +0000 UTC m=+42.876355006" Aug 5 21:37:00.551451 kubelet[3485]: I0805 21:37:00.551036 3485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:37:03.140926 systemd[1]: Started sshd@7-172.31.17.56:22-139.178.68.195:55018.service - OpenSSH per-connection server daemon (139.178.68.195:55018). Aug 5 21:37:03.375164 sshd[4631]: Accepted publickey for core from 139.178.68.195 port 55018 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:03.381607 sshd[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:03.400505 systemd-logind[1994]: New session 8 of user core. Aug 5 21:37:03.409551 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 21:37:03.743300 sshd[4631]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:03.756265 systemd[1]: sshd@7-172.31.17.56:22-139.178.68.195:55018.service: Deactivated successfully. Aug 5 21:37:03.764051 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 21:37:03.767979 systemd-logind[1994]: Session 8 logged out. Waiting for processes to exit. Aug 5 21:37:03.771951 systemd-logind[1994]: Removed session 8. Aug 5 21:37:04.065420 containerd[2007]: time="2024-08-05T21:37:04.063735253Z" level=info msg="StopPodSandbox for \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\"" Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.205 [INFO][4687] k8s.go 608: Cleaning up netns ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.206 [INFO][4687] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" iface="eth0" netns="/var/run/netns/cni-5b4647de-e260-af83-f065-f57e3959224c" Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.207 [INFO][4687] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" iface="eth0" netns="/var/run/netns/cni-5b4647de-e260-af83-f065-f57e3959224c" Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.207 [INFO][4687] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" iface="eth0" netns="/var/run/netns/cni-5b4647de-e260-af83-f065-f57e3959224c" Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.207 [INFO][4687] k8s.go 615: Releasing IP address(es) ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.207 [INFO][4687] utils.go 188: Calico CNI releasing IP address ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.275 [INFO][4694] ipam_plugin.go 411: Releasing address using handleID ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" HandleID="k8s-pod-network.48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.276 [INFO][4694] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.276 [INFO][4694] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.299 [WARNING][4694] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" HandleID="k8s-pod-network.48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.299 [INFO][4694] ipam_plugin.go 439: Releasing address using workloadID ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" HandleID="k8s-pod-network.48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.304 [INFO][4694] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:04.314570 containerd[2007]: 2024-08-05 21:37:04.308 [INFO][4687] k8s.go 621: Teardown processing complete. 
ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:04.321836 containerd[2007]: time="2024-08-05T21:37:04.321038079Z" level=info msg="TearDown network for sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\" successfully" Aug 5 21:37:04.321836 containerd[2007]: time="2024-08-05T21:37:04.321121731Z" level=info msg="StopPodSandbox for \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\" returns successfully" Aug 5 21:37:04.324621 containerd[2007]: time="2024-08-05T21:37:04.322177263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mffms,Uid:39e5d80d-eef5-4d09-a308-7bc48d19044c,Namespace:kube-system,Attempt:1,}" Aug 5 21:37:04.329077 systemd[1]: run-netns-cni\x2d5b4647de\x2de260\x2daf83\x2df065\x2df57e3959224c.mount: Deactivated successfully. Aug 5 21:37:04.850153 systemd-networkd[1845]: cali041c8dbd672: Link UP Aug 5 21:37:04.864769 systemd-networkd[1845]: cali041c8dbd672: Gained carrier Aug 5 21:37:04.881866 (udev-worker)[4754]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.478 [INFO][4711] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0 coredns-5dd5756b68- kube-system 39e5d80d-eef5-4d09-a308-7bc48d19044c 755 0 2024-08-05 21:36:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-56 coredns-5dd5756b68-mffms eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali041c8dbd672 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Namespace="kube-system" Pod="coredns-5dd5756b68-mffms" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.478 [INFO][4711] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Namespace="kube-system" Pod="coredns-5dd5756b68-mffms" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.667 [INFO][4730] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" HandleID="k8s-pod-network.7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.710 [INFO][4730] ipam_plugin.go 264: Auto assigning IP ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" HandleID="k8s-pod-network.7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004f77c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-56", "pod":"coredns-5dd5756b68-mffms", "timestamp":"2024-08-05 21:37:04.667317952 +0000 UTC"}, Hostname:"ip-172-31-17-56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.710 [INFO][4730] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.710 [INFO][4730] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.710 [INFO][4730] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-56' Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.717 [INFO][4730] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" host="ip-172-31-17-56" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.733 [INFO][4730] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-56" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.746 [INFO][4730] ipam.go 489: Trying affinity for 192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.753 [INFO][4730] ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.763 [INFO][4730] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.763 [INFO][4730] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" host="ip-172-31-17-56" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.770 [INFO][4730] ipam.go 1685: Creating new handle: k8s-pod-network.7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.784 [INFO][4730] ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" host="ip-172-31-17-56" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.802 [INFO][4730] ipam.go 1216: Successfully claimed IPs: [192.168.115.129/26] block=192.168.115.128/26 handle="k8s-pod-network.7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" host="ip-172-31-17-56" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.802 [INFO][4730] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.129/26] handle="k8s-pod-network.7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" host="ip-172-31-17-56" Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.802 [INFO][4730] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:37:04.983713 containerd[2007]: 2024-08-05 21:37:04.803 [INFO][4730] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.115.129/26] IPv6=[] ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" HandleID="k8s-pod-network.7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:04.987230 containerd[2007]: 2024-08-05 21:37:04.811 [INFO][4711] k8s.go 386: Populated endpoint ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Namespace="kube-system" Pod="coredns-5dd5756b68-mffms" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"39e5d80d-eef5-4d09-a308-7bc48d19044c", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"", Pod:"coredns-5dd5756b68-mffms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali041c8dbd672", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:04.987230 containerd[2007]: 2024-08-05 21:37:04.814 [INFO][4711] k8s.go 387: Calico CNI using IPs: [192.168.115.129/32] ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Namespace="kube-system" Pod="coredns-5dd5756b68-mffms" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:04.987230 containerd[2007]: 2024-08-05 21:37:04.814 [INFO][4711] dataplane_linux.go 68: Setting the host side veth name to cali041c8dbd672 ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Namespace="kube-system" Pod="coredns-5dd5756b68-mffms" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:04.987230 containerd[2007]: 2024-08-05 21:37:04.862 [INFO][4711] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Namespace="kube-system" Pod="coredns-5dd5756b68-mffms" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:04.987230 containerd[2007]: 2024-08-05 
21:37:04.875 [INFO][4711] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Namespace="kube-system" Pod="coredns-5dd5756b68-mffms" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"39e5d80d-eef5-4d09-a308-7bc48d19044c", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba", Pod:"coredns-5dd5756b68-mffms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali041c8dbd672", MAC:"2a:27:3f:21:b7:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:04.987230 containerd[2007]: 2024-08-05 21:37:04.975 [INFO][4711] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba" Namespace="kube-system" Pod="coredns-5dd5756b68-mffms" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:05.074427 containerd[2007]: time="2024-08-05T21:37:05.072712154Z" level=info msg="StopPodSandbox for \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\"" Aug 5 21:37:05.168932 containerd[2007]: time="2024-08-05T21:37:05.167009991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:05.168932 containerd[2007]: time="2024-08-05T21:37:05.167146887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:05.168932 containerd[2007]: time="2024-08-05T21:37:05.167180499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:05.168932 containerd[2007]: time="2024-08-05T21:37:05.167215839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:05.308418 systemd[1]: Started cri-containerd-7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba.scope - libcontainer container 7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba. Aug 5 21:37:05.584039 systemd-networkd[1845]: vxlan.calico: Link UP Aug 5 21:37:05.584085 systemd-networkd[1845]: vxlan.calico: Gained carrier Aug 5 21:37:05.587183 (udev-worker)[4753]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:37:05.592926 containerd[2007]: time="2024-08-05T21:37:05.592463537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mffms,Uid:39e5d80d-eef5-4d09-a308-7bc48d19044c,Namespace:kube-system,Attempt:1,} returns sandbox id \"7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba\"" Aug 5 21:37:05.612508 containerd[2007]: time="2024-08-05T21:37:05.611844125Z" level=info msg="CreateContainer within sandbox \"7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:37:05.687929 containerd[2007]: time="2024-08-05T21:37:05.687451758Z" level=info msg="CreateContainer within sandbox \"7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cd5997ff2f80b07cc07b0b86d7f22d8ecb511e6c3ce541c7e8172312d4a1c45\"" Aug 5 21:37:05.692203 containerd[2007]: time="2024-08-05T21:37:05.691684314Z" level=info msg="StartContainer for \"7cd5997ff2f80b07cc07b0b86d7f22d8ecb511e6c3ce541c7e8172312d4a1c45\"" Aug 5 21:37:05.892336 systemd[1]: Started cri-containerd-7cd5997ff2f80b07cc07b0b86d7f22d8ecb511e6c3ce541c7e8172312d4a1c45.scope - libcontainer container 7cd5997ff2f80b07cc07b0b86d7f22d8ecb511e6c3ce541c7e8172312d4a1c45. Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.628 [INFO][4791] k8s.go 608: Cleaning up netns ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.632 [INFO][4791] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" iface="eth0" netns="/var/run/netns/cni-9aaaa42e-df11-8109-1836-32827cb9d1b0" Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.632 [INFO][4791] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" iface="eth0" netns="/var/run/netns/cni-9aaaa42e-df11-8109-1836-32827cb9d1b0" Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.638 [INFO][4791] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" iface="eth0" netns="/var/run/netns/cni-9aaaa42e-df11-8109-1836-32827cb9d1b0" Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.638 [INFO][4791] k8s.go 615: Releasing IP address(es) ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.638 [INFO][4791] utils.go 188: Calico CNI releasing IP address ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.787 [INFO][4830] ipam_plugin.go 411: Releasing address using handleID ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" HandleID="k8s-pod-network.a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.787 [INFO][4830] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.787 [INFO][4830] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.839 [WARNING][4830] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" HandleID="k8s-pod-network.a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.839 [INFO][4830] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" HandleID="k8s-pod-network.a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.862 [INFO][4830] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:05.902101 containerd[2007]: 2024-08-05 21:37:05.886 [INFO][4791] k8s.go 621: Teardown processing complete. 
ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:05.904024 containerd[2007]: time="2024-08-05T21:37:05.902849035Z" level=info msg="TearDown network for sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\" successfully" Aug 5 21:37:05.904024 containerd[2007]: time="2024-08-05T21:37:05.902909971Z" level=info msg="StopPodSandbox for \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\" returns successfully" Aug 5 21:37:05.910507 containerd[2007]: time="2024-08-05T21:37:05.907614631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pkkts,Uid:a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48,Namespace:kube-system,Attempt:1,}" Aug 5 21:37:06.070587 containerd[2007]: time="2024-08-05T21:37:06.069781791Z" level=info msg="StopPodSandbox for \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\"" Aug 5 21:37:06.074266 containerd[2007]: time="2024-08-05T21:37:06.069783375Z" level=info msg="StopPodSandbox for \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\"" Aug 5 21:37:06.219147 containerd[2007]: time="2024-08-05T21:37:06.217042888Z" level=info msg="StartContainer for \"7cd5997ff2f80b07cc07b0b86d7f22d8ecb511e6c3ce541c7e8172312d4a1c45\" returns successfully" Aug 5 21:37:06.631560 systemd-networkd[1845]: vxlan.calico: Gained IPv6LL Aug 5 21:37:06.687478 systemd[1]: run-netns-cni\x2d9aaaa42e\x2ddf11\x2d8109\x2d1836\x2d32827cb9d1b0.mount: Deactivated successfully. Aug 5 21:37:06.825071 systemd-networkd[1845]: cali041c8dbd672: Gained IPv6LL Aug 5 21:37:06.951133 systemd-networkd[1845]: caliad10c325e39: Link UP Aug 5 21:37:06.954761 systemd-networkd[1845]: caliad10c325e39: Gained carrier Aug 5 21:37:07.009133 kubelet[3485]: I0805 21:37:07.005812 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mffms" podStartSLOduration=38.005715676 podCreationTimestamp="2024-08-05 21:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:37:06.69187653 +0000 UTC m=+50.034519969" watchObservedRunningTime="2024-08-05 21:37:07.005715676 +0000 UTC m=+50.348359103" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.278 [INFO][4865] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0 coredns-5dd5756b68- kube-system a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48 767 0 2024-08-05 21:36:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-56 coredns-5dd5756b68-pkkts eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad10c325e39 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Namespace="kube-system" Pod="coredns-5dd5756b68-pkkts" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.284 [INFO][4865] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Namespace="kube-system" Pod="coredns-5dd5756b68-pkkts" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 
21:37:06.557 [INFO][4936] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" HandleID="k8s-pod-network.4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.747 [INFO][4936] ipam_plugin.go 264: Auto assigning IP ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" HandleID="k8s-pod-network.4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001ce250), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-56", "pod":"coredns-5dd5756b68-pkkts", "timestamp":"2024-08-05 21:37:06.557144154 +0000 UTC"}, Hostname:"ip-172-31-17-56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.747 [INFO][4936] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.747 [INFO][4936] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.747 [INFO][4936] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-56' Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.754 [INFO][4936] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" host="ip-172-31-17-56" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.781 [INFO][4936] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-56" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.808 [INFO][4936] ipam.go 489: Trying affinity for 192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.828 [INFO][4936] ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.886 [INFO][4936] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.887 [INFO][4936] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" host="ip-172-31-17-56" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.893 [INFO][4936] ipam.go 1685: Creating new handle: k8s-pod-network.4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58 Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.911 [INFO][4936] ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" host="ip-172-31-17-56" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.934 [INFO][4936] ipam.go 1216: Successfully claimed IPs: [192.168.115.130/26] block=192.168.115.128/26 handle="k8s-pod-network.4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" host="ip-172-31-17-56" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.934 [INFO][4936] 
ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.130/26] handle="k8s-pod-network.4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" host="ip-172-31-17-56" Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.934 [INFO][4936] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:07.021417 containerd[2007]: 2024-08-05 21:37:06.934 [INFO][4936] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.115.130/26] IPv6=[] ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" HandleID="k8s-pod-network.4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:07.027513 containerd[2007]: 2024-08-05 21:37:06.940 [INFO][4865] k8s.go 386: Populated endpoint ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Namespace="kube-system" Pod="coredns-5dd5756b68-pkkts" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"", Pod:"coredns-5dd5756b68-pkkts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad10c325e39", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:07.027513 containerd[2007]: 2024-08-05 21:37:06.941 [INFO][4865] k8s.go 387: Calico CNI using IPs: [192.168.115.130/32] ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Namespace="kube-system" Pod="coredns-5dd5756b68-pkkts" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:07.027513 containerd[2007]: 2024-08-05 21:37:06.941 [INFO][4865] dataplane_linux.go 68: Setting the host side veth name to caliad10c325e39 ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Namespace="kube-system" Pod="coredns-5dd5756b68-pkkts" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:07.027513 containerd[2007]: 2024-08-05 21:37:06.953 [INFO][4865] 
dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Namespace="kube-system" Pod="coredns-5dd5756b68-pkkts" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:07.027513 containerd[2007]: 2024-08-05 21:37:06.955 [INFO][4865] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Namespace="kube-system" Pod="coredns-5dd5756b68-pkkts" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58", Pod:"coredns-5dd5756b68-pkkts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad10c325e39", MAC:"7e:ab:ee:0b:ff:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:07.027513 containerd[2007]: 2024-08-05 21:37:07.012 [INFO][4865] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58" Namespace="kube-system" Pod="coredns-5dd5756b68-pkkts" WorkloadEndpoint="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.626 [INFO][4930] k8s.go 608: Cleaning up netns ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.627 [INFO][4930] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" iface="eth0" netns="/var/run/netns/cni-da9b7f21-9d5c-d936-8955-2f778f83e5ca" Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.628 [INFO][4930] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" iface="eth0" netns="/var/run/netns/cni-da9b7f21-9d5c-d936-8955-2f778f83e5ca" Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.631 [INFO][4930] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" iface="eth0" netns="/var/run/netns/cni-da9b7f21-9d5c-d936-8955-2f778f83e5ca" Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.631 [INFO][4930] k8s.go 615: Releasing IP address(es) ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.631 [INFO][4930] utils.go 188: Calico CNI releasing IP address ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.804 [INFO][4956] ipam_plugin.go 411: Releasing address using handleID ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" HandleID="k8s-pod-network.b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.805 [INFO][4956] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.935 [INFO][4956] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.986 [WARNING][4956] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" HandleID="k8s-pod-network.b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:06.988 [INFO][4956] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" HandleID="k8s-pod-network.b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:07.015 [INFO][4956] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:07.042693 containerd[2007]: 2024-08-05 21:37:07.030 [INFO][4930] k8s.go 621: Teardown processing complete. ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:07.063277 systemd[1]: run-netns-cni\x2dda9b7f21\x2d9d5c\x2dd936\x2d8955\x2d2f778f83e5ca.mount: Deactivated successfully. 
Aug 5 21:37:07.073862 containerd[2007]: time="2024-08-05T21:37:07.072328024Z" level=info msg="TearDown network for sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\" successfully" Aug 5 21:37:07.073862 containerd[2007]: time="2024-08-05T21:37:07.072939448Z" level=info msg="StopPodSandbox for \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\" returns successfully" Aug 5 21:37:07.079699 containerd[2007]: time="2024-08-05T21:37:07.079302520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rk7q,Uid:8fc31815-4413-45c7-b4f1-d969a93d2abe,Namespace:calico-system,Attempt:1,}" Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:06.660 [INFO][4909] k8s.go 608: Cleaning up netns ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:06.665 [INFO][4909] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" iface="eth0" netns="/var/run/netns/cni-ffe1bf62-7e04-a832-7c76-5acaaf303a3c" Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:06.666 [INFO][4909] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" iface="eth0" netns="/var/run/netns/cni-ffe1bf62-7e04-a832-7c76-5acaaf303a3c" Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:06.666 [INFO][4909] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" iface="eth0" netns="/var/run/netns/cni-ffe1bf62-7e04-a832-7c76-5acaaf303a3c" Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:06.671 [INFO][4909] k8s.go 615: Releasing IP address(es) ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:06.674 [INFO][4909] utils.go 188: Calico CNI releasing IP address ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:06.897 [INFO][4960] ipam_plugin.go 411: Releasing address using handleID ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" HandleID="k8s-pod-network.621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:06.902 [INFO][4960] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:07.015 [INFO][4960] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:07.077 [WARNING][4960] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" HandleID="k8s-pod-network.621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:07.079 [INFO][4960] ipam_plugin.go 439: Releasing address using workloadID ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" HandleID="k8s-pod-network.621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:07.092 [INFO][4960] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:07.116051 containerd[2007]: 2024-08-05 21:37:07.103 [INFO][4909] k8s.go 621: Teardown processing complete. ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:07.131538 containerd[2007]: time="2024-08-05T21:37:07.116317373Z" level=info msg="TearDown network for sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\" successfully" Aug 5 21:37:07.131538 containerd[2007]: time="2024-08-05T21:37:07.116401577Z" level=info msg="StopPodSandbox for \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\" returns successfully" Aug 5 21:37:07.135148 systemd[1]: run-netns-cni\x2dffe1bf62\x2d7e04\x2da832\x2d7c76\x2d5acaaf303a3c.mount: Deactivated successfully. Aug 5 21:37:07.137872 containerd[2007]: time="2024-08-05T21:37:07.137783669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d79b48bbd-jq6mz,Uid:85d380da-4d3c-44a8-b1e4-555530171664,Namespace:calico-system,Attempt:1,}" Aug 5 21:37:07.194359 containerd[2007]: time="2024-08-05T21:37:07.191812253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:07.194359 containerd[2007]: time="2024-08-05T21:37:07.191971997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:07.194359 containerd[2007]: time="2024-08-05T21:37:07.192018569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:07.194359 containerd[2007]: time="2024-08-05T21:37:07.192076673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:07.293766 systemd[1]: Started cri-containerd-4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58.scope - libcontainer container 4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58. 
Aug 5 21:37:07.540168 containerd[2007]: time="2024-08-05T21:37:07.538692043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pkkts,Uid:a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48,Namespace:kube-system,Attempt:1,} returns sandbox id \"4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58\"" Aug 5 21:37:07.568616 containerd[2007]: time="2024-08-05T21:37:07.568237051Z" level=info msg="CreateContainer within sandbox \"4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:37:07.720556 containerd[2007]: time="2024-08-05T21:37:07.720455396Z" level=info msg="CreateContainer within sandbox \"4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c4e173e06c152399420c782aecb76233e52a89a02d5d2f6c20e77c189f57fc9\"" Aug 5 21:37:07.722692 containerd[2007]: time="2024-08-05T21:37:07.722578232Z" level=info msg="StartContainer for \"0c4e173e06c152399420c782aecb76233e52a89a02d5d2f6c20e77c189f57fc9\"" Aug 5 21:37:08.009063 systemd[1]: Started cri-containerd-0c4e173e06c152399420c782aecb76233e52a89a02d5d2f6c20e77c189f57fc9.scope - libcontainer container 0c4e173e06c152399420c782aecb76233e52a89a02d5d2f6c20e77c189f57fc9. Aug 5 21:37:08.101576 systemd-networkd[1845]: cali9adb65507e2: Link UP Aug 5 21:37:08.122006 systemd-networkd[1845]: cali9adb65507e2: Gained carrier Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.473 [INFO][5030] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0 calico-kube-controllers-6d79b48bbd- calico-system 85d380da-4d3c-44a8-b1e4-555530171664 777 0 2024-08-05 21:36:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d79b48bbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-56 calico-kube-controllers-6d79b48bbd-jq6mz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9adb65507e2 [] []}} ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Namespace="calico-system" Pod="calico-kube-controllers-6d79b48bbd-jq6mz" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.476 [INFO][5030] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Namespace="calico-system" Pod="calico-kube-controllers-6d79b48bbd-jq6mz" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.790 [INFO][5067] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" HandleID="k8s-pod-network.0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.849 [INFO][5067] ipam_plugin.go 264: Auto assigning IP ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" 
HandleID="k8s-pod-network.0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000391ec0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-56", "pod":"calico-kube-controllers-6d79b48bbd-jq6mz", "timestamp":"2024-08-05 21:37:07.790866296 +0000 UTC"}, Hostname:"ip-172-31-17-56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.849 [INFO][5067] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.849 [INFO][5067] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.849 [INFO][5067] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-56' Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.862 [INFO][5067] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" host="ip-172-31-17-56" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.901 [INFO][5067] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-56" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.954 [INFO][5067] ipam.go 489: Trying affinity for 192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.962 [INFO][5067] ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.995 [INFO][5067] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:07.997 [INFO][5067] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" host="ip-172-31-17-56" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:08.007 [INFO][5067] ipam.go 1685: Creating new handle: k8s-pod-network.0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254 Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:08.025 [INFO][5067] ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" host="ip-172-31-17-56" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:08.050 [INFO][5067] ipam.go 1216: Successfully claimed IPs: [192.168.115.131/26] block=192.168.115.128/26 handle="k8s-pod-network.0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" host="ip-172-31-17-56" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:08.050 [INFO][5067] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.131/26] handle="k8s-pod-network.0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" host="ip-172-31-17-56" Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:08.051 [INFO][5067] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:37:08.202203 containerd[2007]: 2024-08-05 21:37:08.055 [INFO][5067] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.115.131/26] IPv6=[] ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" HandleID="k8s-pod-network.0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:08.205503 containerd[2007]: 2024-08-05 21:37:08.061 [INFO][5030] k8s.go 386: Populated endpoint ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Namespace="calico-system" Pod="calico-kube-controllers-6d79b48bbd-jq6mz" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0", GenerateName:"calico-kube-controllers-6d79b48bbd-", Namespace:"calico-system", SelfLink:"", UID:"85d380da-4d3c-44a8-b1e4-555530171664", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d79b48bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"", Pod:"calico-kube-controllers-6d79b48bbd-jq6mz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9adb65507e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:08.205503 containerd[2007]: 2024-08-05 21:37:08.062 [INFO][5030] k8s.go 387: Calico CNI using IPs: [192.168.115.131/32] ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Namespace="calico-system" Pod="calico-kube-controllers-6d79b48bbd-jq6mz" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:08.205503 containerd[2007]: 2024-08-05 21:37:08.062 [INFO][5030] dataplane_linux.go 68: Setting the host side veth name to cali9adb65507e2 ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Namespace="calico-system" Pod="calico-kube-controllers-6d79b48bbd-jq6mz" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:08.205503 containerd[2007]: 2024-08-05 21:37:08.117 [INFO][5030] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Namespace="calico-system" Pod="calico-kube-controllers-6d79b48bbd-jq6mz" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:08.205503 containerd[2007]: 2024-08-05 21:37:08.120 [INFO][5030] k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Namespace="calico-system" Pod="calico-kube-controllers-6d79b48bbd-jq6mz" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0", GenerateName:"calico-kube-controllers-6d79b48bbd-", Namespace:"calico-system", SelfLink:"", UID:"85d380da-4d3c-44a8-b1e4-555530171664", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d79b48bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254", Pod:"calico-kube-controllers-6d79b48bbd-jq6mz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9adb65507e2", MAC:"e6:2e:40:aa:11:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:08.205503 containerd[2007]: 2024-08-05 21:37:08.178 [INFO][5030] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254" Namespace="calico-system" Pod="calico-kube-controllers-6d79b48bbd-jq6mz" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:08.217800 containerd[2007]: time="2024-08-05T21:37:08.217339254Z" level=info msg="StartContainer for \"0c4e173e06c152399420c782aecb76233e52a89a02d5d2f6c20e77c189f57fc9\" returns successfully" Aug 5 21:37:08.316736 systemd-networkd[1845]: cali6714d5e7b8b: Link UP Aug 5 21:37:08.317319 systemd-networkd[1845]: cali6714d5e7b8b: Gained carrier Aug 5 21:37:08.359745 systemd-networkd[1845]: caliad10c325e39: Gained IPv6LL Aug 5 21:37:08.388840 containerd[2007]: time="2024-08-05T21:37:08.383593039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:08.388840 containerd[2007]: time="2024-08-05T21:37:08.386208907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:08.388840 containerd[2007]: time="2024-08-05T21:37:08.386662987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:08.388840 containerd[2007]: time="2024-08-05T21:37:08.386717695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:07.513 [INFO][5008] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0 csi-node-driver- calico-system 8fc31815-4413-45c7-b4f1-d969a93d2abe 778 0 2024-08-05 21:36:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-17-56 csi-node-driver-6rk7q eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali6714d5e7b8b [] []}} ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Namespace="calico-system" Pod="csi-node-driver-6rk7q" WorkloadEndpoint="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:07.513 [INFO][5008] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Namespace="calico-system" Pod="csi-node-driver-6rk7q" WorkloadEndpoint="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:07.914 [INFO][5071] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" HandleID="k8s-pod-network.8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.019 [INFO][5071] ipam_plugin.go 264: Auto assigning IP ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" HandleID="k8s-pod-network.8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000335900), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-56", "pod":"csi-node-driver-6rk7q", "timestamp":"2024-08-05 21:37:07.914009829 +0000 UTC"}, Hostname:"ip-172-31-17-56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.020 [INFO][5071] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.053 [INFO][5071] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.056 [INFO][5071] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-56' Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.064 [INFO][5071] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" host="ip-172-31-17-56" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.098 [INFO][5071] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-56" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.166 [INFO][5071] ipam.go 489: Trying affinity for 192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.186 [INFO][5071] ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.196 [INFO][5071] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.196 [INFO][5071] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" host="ip-172-31-17-56" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.203 [INFO][5071] ipam.go 1685: Creating new handle: k8s-pod-network.8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58 Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.239 [INFO][5071] ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" host="ip-172-31-17-56" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.277 [INFO][5071] ipam.go 1216: Successfully claimed IPs: [192.168.115.132/26] block=192.168.115.128/26 handle="k8s-pod-network.8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" host="ip-172-31-17-56" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.278 [INFO][5071] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.132/26] handle="k8s-pod-network.8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" host="ip-172-31-17-56" Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.279 [INFO][5071] ipam_plugin.go 373: Released host-wide IPAM lock. 
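The ipam.go trace above follows Calico's usual assignment flow: look up the host's block affinities, load the affine block (here 192.168.115.128/26), claim the next free address for the handle, and write the block back, all while holding the host-wide IPAM lock. Below is a minimal conceptual sketch of that claim step; the types and helper names are purely illustrative and are not the real libcalico-go implementation.

package main

import (
	"fmt"
	"net"
	"sync"
)

// block models an affine IPAM block such as 192.168.115.128/26.
// Illustrative stand-in only, not the libcalico-go data model.
type block struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> handle ID
}

var hostIPAMLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

// claimNext assigns the first free address in the block to the given handle,
// mirroring "Attempting to assign 1 addresses from block" / "Writing block in order to claim IPs".
func claimNext(b *block, handle string) (net.IP, error) {
	hostIPAMLock.Lock()
	defer hostIPAMLock.Unlock()

	ip := b.cidr.IP.Mask(b.cidr.Mask)
	for ; b.cidr.Contains(ip); ip = nextIP(ip) {
		if _, taken := b.allocated[ip.String()]; !taken {
			b.allocated[ip.String()] = handle // "Successfully claimed IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.115.128/26")
	b := &block{cidr: cidr, allocated: map[string]string{
		// .128-.131 were already handed out on this node earlier in the log.
		"192.168.115.128": "tunnel", "192.168.115.129": "wep-1",
		"192.168.115.130": "wep-2", "192.168.115.131": "wep-3",
	}}
	ip, err := claimNext(b, "k8s-pod-network.8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58")
	fmt.Println(ip, err) // 192.168.115.132 <nil>, matching the assignment above
}

Because the /26 block is affine to ip-172-31-17-56, the node can hand out consecutive /32s (here .129 through .132) from its own block without coordinating beyond the host-wide lock.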
Aug 5 21:37:08.411889 containerd[2007]: 2024-08-05 21:37:08.279 [INFO][5071] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.115.132/26] IPv6=[] ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" HandleID="k8s-pod-network.8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:08.413506 containerd[2007]: 2024-08-05 21:37:08.291 [INFO][5008] k8s.go 386: Populated endpoint ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Namespace="calico-system" Pod="csi-node-driver-6rk7q" WorkloadEndpoint="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fc31815-4413-45c7-b4f1-d969a93d2abe", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"", Pod:"csi-node-driver-6rk7q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6714d5e7b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:08.413506 containerd[2007]: 2024-08-05 21:37:08.293 [INFO][5008] k8s.go 387: Calico CNI using IPs: [192.168.115.132/32] ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Namespace="calico-system" Pod="csi-node-driver-6rk7q" WorkloadEndpoint="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:08.413506 containerd[2007]: 2024-08-05 21:37:08.294 [INFO][5008] dataplane_linux.go 68: Setting the host side veth name to cali6714d5e7b8b ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Namespace="calico-system" Pod="csi-node-driver-6rk7q" WorkloadEndpoint="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:08.413506 containerd[2007]: 2024-08-05 21:37:08.324 [INFO][5008] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Namespace="calico-system" Pod="csi-node-driver-6rk7q" WorkloadEndpoint="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:08.413506 containerd[2007]: 2024-08-05 21:37:08.331 [INFO][5008] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Namespace="calico-system" Pod="csi-node-driver-6rk7q" WorkloadEndpoint="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fc31815-4413-45c7-b4f1-d969a93d2abe", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58", Pod:"csi-node-driver-6rk7q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6714d5e7b8b", MAC:"3a:94:fd:d4:ca:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:08.413506 containerd[2007]: 2024-08-05 21:37:08.378 [INFO][5008] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58" Namespace="calico-system" Pod="csi-node-driver-6rk7q" WorkloadEndpoint="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:08.465955 systemd[1]: Started cri-containerd-0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254.scope - libcontainer container 0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254. Aug 5 21:37:08.503761 containerd[2007]: time="2024-08-05T21:37:08.502579891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:08.504549 containerd[2007]: time="2024-08-05T21:37:08.504095647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:08.504549 containerd[2007]: time="2024-08-05T21:37:08.504151615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:08.504549 containerd[2007]: time="2024-08-05T21:37:08.504177211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:08.592666 systemd[1]: Started cri-containerd-8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58.scope - libcontainer container 8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58. Aug 5 21:37:08.797830 systemd[1]: Started sshd@8-172.31.17.56:22-139.178.68.195:55032.service - OpenSSH per-connection server daemon (139.178.68.195:55032). 
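The k8s.go 386/414/500 entries show the workload endpoint being built in two passes: first populated with the pod's identity, profiles and assigned /32, then updated once the MAC, host-side veth name and active container ID are known, and only then written to the datastore. A simplified sketch of that ordering, using a cut-down stand-in type rather than the projectcalico.org/v3 API (all field values are taken from the log above):

package main

import "fmt"

// workloadEndpoint is a reduced stand-in for the v3.WorkloadEndpoint spec dumped in the log.
type workloadEndpoint struct {
	Node, Pod, Endpoint string
	IPNetworks          []string
	Profiles            []string
	InterfaceName       string // host-side veth, e.g. cali6714d5e7b8b
	MAC                 string
	ContainerID         string
}

// writeToDatastore stands in for "k8s.go 500: Wrote updated endpoint to datastore".
func writeToDatastore(wep workloadEndpoint) { fmt.Printf("wrote endpoint: %+v\n", wep) }

func main() {
	// Pass 1: "Populated endpoint" - identity, IPs and profiles are known; MAC and ContainerID are not.
	wep := workloadEndpoint{
		Node:          "ip-172-31-17-56",
		Pod:           "csi-node-driver-6rk7q",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.115.132/32"},
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.default"},
		InterfaceName: "cali6714d5e7b8b",
	}

	// Pass 2: "Added Mac, interface name, and active container ID to endpoint".
	wep.MAC = "3a:94:fd:d4:ca:17"
	wep.ContainerID = "8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58"

	// Final step: persist the completed endpoint.
	writeToDatastore(wep)
}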
Aug 5 21:37:08.906610 containerd[2007]: time="2024-08-05T21:37:08.906443937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rk7q,Uid:8fc31815-4413-45c7-b4f1-d969a93d2abe,Namespace:calico-system,Attempt:1,} returns sandbox id \"8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58\"" Aug 5 21:37:08.919823 containerd[2007]: time="2024-08-05T21:37:08.919744342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 21:37:08.984407 containerd[2007]: time="2024-08-05T21:37:08.984262546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d79b48bbd-jq6mz,Uid:85d380da-4d3c-44a8-b1e4-555530171664,Namespace:calico-system,Attempt:1,} returns sandbox id \"0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254\"" Aug 5 21:37:09.032343 sshd[5225]: Accepted publickey for core from 139.178.68.195 port 55032 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:09.041256 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:09.062638 systemd-logind[1994]: New session 9 of user core. Aug 5 21:37:09.069715 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 21:37:09.459926 sshd[5225]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:09.471983 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 21:37:09.478566 systemd[1]: sshd@8-172.31.17.56:22-139.178.68.195:55032.service: Deactivated successfully. Aug 5 21:37:09.492218 systemd-logind[1994]: Session 9 logged out. Waiting for processes to exit. Aug 5 21:37:09.495122 systemd-logind[1994]: Removed session 9. Aug 5 21:37:09.640117 systemd-networkd[1845]: cali6714d5e7b8b: Gained IPv6LL Aug 5 21:37:09.640762 systemd-networkd[1845]: cali9adb65507e2: Gained IPv6LL Aug 5 21:37:09.698187 kubelet[3485]: I0805 21:37:09.697626 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pkkts" podStartSLOduration=40.696672201 podCreationTimestamp="2024-08-05 21:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:37:08.714332913 +0000 UTC m=+52.056976352" watchObservedRunningTime="2024-08-05 21:37:09.696672201 +0000 UTC m=+53.039315700" Aug 5 21:37:10.722400 containerd[2007]: time="2024-08-05T21:37:10.720899519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:10.723664 containerd[2007]: time="2024-08-05T21:37:10.723588263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Aug 5 21:37:10.725795 containerd[2007]: time="2024-08-05T21:37:10.725718575Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:10.735810 containerd[2007]: time="2024-08-05T21:37:10.735723287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:10.738937 containerd[2007]: time="2024-08-05T21:37:10.737746703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.817884809s" Aug 5 21:37:10.738937 containerd[2007]: time="2024-08-05T21:37:10.737882495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Aug 5 21:37:10.742394 containerd[2007]: time="2024-08-05T21:37:10.742252355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 21:37:10.744651 containerd[2007]: time="2024-08-05T21:37:10.744600215Z" level=info msg="CreateContainer within sandbox \"8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 21:37:10.800527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556444104.mount: Deactivated successfully. Aug 5 21:37:10.805583 containerd[2007]: time="2024-08-05T21:37:10.805412027Z" level=info msg="CreateContainer within sandbox \"8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4303532571a09dfef03b3a27f9a983ae4b7592c6a7bf73b002ff218d2016aca4\"" Aug 5 21:37:10.815735 containerd[2007]: time="2024-08-05T21:37:10.815618075Z" level=info msg="StartContainer for \"4303532571a09dfef03b3a27f9a983ae4b7592c6a7bf73b002ff218d2016aca4\"" Aug 5 21:37:10.940799 systemd[1]: Started cri-containerd-4303532571a09dfef03b3a27f9a983ae4b7592c6a7bf73b002ff218d2016aca4.scope - libcontainer container 4303532571a09dfef03b3a27f9a983ae4b7592c6a7bf73b002ff218d2016aca4. Aug 5 21:37:11.125280 containerd[2007]: time="2024-08-05T21:37:11.125177865Z" level=info msg="StartContainer for \"4303532571a09dfef03b3a27f9a983ae4b7592c6a7bf73b002ff218d2016aca4\" returns successfully" Aug 5 21:37:12.257964 ntpd[1987]: Listen normally on 7 vxlan.calico 192.168.115.128:123 Aug 5 21:37:12.260629 ntpd[1987]: 5 Aug 21:37:12 ntpd[1987]: Listen normally on 7 vxlan.calico 192.168.115.128:123 Aug 5 21:37:12.260629 ntpd[1987]: 5 Aug 21:37:12 ntpd[1987]: Listen normally on 8 cali041c8dbd672 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 5 21:37:12.260629 ntpd[1987]: 5 Aug 21:37:12 ntpd[1987]: Listen normally on 9 vxlan.calico [fe80::64f1:45ff:feda:4ef8%5]:123 Aug 5 21:37:12.258234 ntpd[1987]: Listen normally on 8 cali041c8dbd672 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 5 21:37:12.264036 ntpd[1987]: 5 Aug 21:37:12 ntpd[1987]: Listen normally on 10 caliad10c325e39 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 21:37:12.264036 ntpd[1987]: 5 Aug 21:37:12 ntpd[1987]: Listen normally on 11 cali9adb65507e2 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 21:37:12.264036 ntpd[1987]: 5 Aug 21:37:12 ntpd[1987]: Listen normally on 12 cali6714d5e7b8b [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 21:37:12.258343 ntpd[1987]: Listen normally on 9 vxlan.calico [fe80::64f1:45ff:feda:4ef8%5]:123 Aug 5 21:37:12.261079 ntpd[1987]: Listen normally on 10 caliad10c325e39 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 21:37:12.261217 ntpd[1987]: Listen normally on 11 cali9adb65507e2 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 21:37:12.261292 ntpd[1987]: Listen normally on 12 cali6714d5e7b8b [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 21:37:14.079298 containerd[2007]: time="2024-08-05T21:37:14.077413979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:14.087422 
containerd[2007]: time="2024-08-05T21:37:14.087257711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Aug 5 21:37:14.103580 containerd[2007]: time="2024-08-05T21:37:14.103470347Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:14.122184 containerd[2007]: time="2024-08-05T21:37:14.122099027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:14.125889 containerd[2007]: time="2024-08-05T21:37:14.125770787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 3.383042572s" Aug 5 21:37:14.125889 containerd[2007]: time="2024-08-05T21:37:14.125860355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Aug 5 21:37:14.141408 containerd[2007]: time="2024-08-05T21:37:14.139764023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 21:37:14.208888 containerd[2007]: time="2024-08-05T21:37:14.208821336Z" level=info msg="CreateContainer within sandbox \"0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 21:37:14.239027 containerd[2007]: time="2024-08-05T21:37:14.238881384Z" level=info msg="CreateContainer within sandbox \"0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"803410d1a08c9dfe3855bf34aae6b0bc46276579b7b5643a60ed4ceda667965c\"" Aug 5 21:37:14.246074 containerd[2007]: time="2024-08-05T21:37:14.245999700Z" level=info msg="StartContainer for \"803410d1a08c9dfe3855bf34aae6b0bc46276579b7b5643a60ed4ceda667965c\"" Aug 5 21:37:14.351801 systemd[1]: Started cri-containerd-803410d1a08c9dfe3855bf34aae6b0bc46276579b7b5643a60ed4ceda667965c.scope - libcontainer container 803410d1a08c9dfe3855bf34aae6b0bc46276579b7b5643a60ed4ceda667965c. Aug 5 21:37:14.514305 systemd[1]: Started sshd@9-172.31.17.56:22-139.178.68.195:36100.service - OpenSSH per-connection server daemon (139.178.68.195:36100). Aug 5 21:37:14.611476 containerd[2007]: time="2024-08-05T21:37:14.611354930Z" level=info msg="StartContainer for \"803410d1a08c9dfe3855bf34aae6b0bc46276579b7b5643a60ed4ceda667965c\" returns successfully" Aug 5 21:37:14.760199 sshd[5336]: Accepted publickey for core from 139.178.68.195 port 36100 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:14.766791 sshd[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:14.787561 systemd-logind[1994]: New session 10 of user core. Aug 5 21:37:14.798096 systemd[1]: Started session-10.scope - Session 10 of User core. 
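Each "CreateContainer within sandbox ... for container &ContainerMetadata{...}" / "StartContainer ... returns successfully" pair in the log corresponds to a CreateContainer followed by a StartContainer call against containerd's CRI runtime service, issued here by kubelet. The sketch below shows the same two calls made directly over the CRI socket; it assumes the default containerd socket path and a minimal sandbox config (the sandbox ID and image are the ones from the log), so it is an illustration of the API shape rather than a reproduction of what kubelet runs.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: containerd's CRI endpoint is at the default socket path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// CreateContainer inside an already-running pod sandbox (IDs and image from the log above).
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: "0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-kube-controllers", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/kube-controllers:v3.28.0"},
		},
		// Left minimal here; in the logged flow kubelet supplies the full sandbox config from the pod spec.
		SandboxConfig: &runtimeapi.PodSandboxConfig{},
	})
	if err != nil {
		log.Fatal(err)
	}

	// StartContainer, the step logged as "StartContainer for ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s", created.ContainerId)
}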
Aug 5 21:37:15.235033 sshd[5336]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:15.248731 systemd[1]: sshd@9-172.31.17.56:22-139.178.68.195:36100.service: Deactivated successfully. Aug 5 21:37:15.260160 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 21:37:15.263710 systemd-logind[1994]: Session 10 logged out. Waiting for processes to exit. Aug 5 21:37:15.301483 systemd[1]: Started sshd@10-172.31.17.56:22-139.178.68.195:36106.service - OpenSSH per-connection server daemon (139.178.68.195:36106). Aug 5 21:37:15.305053 systemd-logind[1994]: Removed session 10. Aug 5 21:37:15.511218 sshd[5361]: Accepted publickey for core from 139.178.68.195 port 36106 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:15.520577 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:15.543238 systemd-logind[1994]: New session 11 of user core. Aug 5 21:37:15.556212 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 21:37:16.204627 kubelet[3485]: I0805 21:37:16.204049 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d79b48bbd-jq6mz" podStartSLOduration=34.058773117 podCreationTimestamp="2024-08-05 21:36:37 +0000 UTC" firstStartedPulling="2024-08-05 21:37:08.989663182 +0000 UTC m=+52.332306609" lastFinishedPulling="2024-08-05 21:37:14.134822351 +0000 UTC m=+57.477465778" observedRunningTime="2024-08-05 21:37:14.807736755 +0000 UTC m=+58.150380374" watchObservedRunningTime="2024-08-05 21:37:16.203932286 +0000 UTC m=+59.546575713" Aug 5 21:37:16.781150 sshd[5361]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:16.792814 systemd[1]: sshd@10-172.31.17.56:22-139.178.68.195:36106.service: Deactivated successfully. Aug 5 21:37:16.806992 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 21:37:16.816008 systemd-logind[1994]: Session 11 logged out. Waiting for processes to exit. Aug 5 21:37:16.847333 systemd[1]: Started sshd@11-172.31.17.56:22-139.178.68.195:36118.service - OpenSSH per-connection server daemon (139.178.68.195:36118). Aug 5 21:37:16.852624 systemd-logind[1994]: Removed session 11. Aug 5 21:37:17.075633 containerd[2007]: time="2024-08-05T21:37:17.075041114Z" level=info msg="StopPodSandbox for \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\"" Aug 5 21:37:17.083345 sshd[5397]: Accepted publickey for core from 139.178.68.195 port 36118 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:17.099685 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:17.134529 systemd-logind[1994]: New session 12 of user core. Aug 5 21:37:17.146549 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.265 [WARNING][5413] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0", GenerateName:"calico-kube-controllers-6d79b48bbd-", Namespace:"calico-system", SelfLink:"", UID:"85d380da-4d3c-44a8-b1e4-555530171664", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d79b48bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254", Pod:"calico-kube-controllers-6d79b48bbd-jq6mz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9adb65507e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.269 [INFO][5413] k8s.go 608: Cleaning up netns ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.269 [INFO][5413] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" iface="eth0" netns="" Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.269 [INFO][5413] k8s.go 615: Releasing IP address(es) ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.269 [INFO][5413] utils.go 188: Calico CNI releasing IP address ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.506 [INFO][5423] ipam_plugin.go 411: Releasing address using handleID ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" HandleID="k8s-pod-network.621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.512 [INFO][5423] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.512 [INFO][5423] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.599 [WARNING][5423] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" HandleID="k8s-pod-network.621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.599 [INFO][5423] ipam_plugin.go 439: Releasing address using workloadID ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" HandleID="k8s-pod-network.621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.611 [INFO][5423] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:17.626873 containerd[2007]: 2024-08-05 21:37:17.618 [INFO][5413] k8s.go 621: Teardown processing complete. ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:17.631692 containerd[2007]: time="2024-08-05T21:37:17.631195289Z" level=info msg="TearDown network for sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\" successfully" Aug 5 21:37:17.631692 containerd[2007]: time="2024-08-05T21:37:17.631251413Z" level=info msg="StopPodSandbox for \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\" returns successfully" Aug 5 21:37:17.639601 containerd[2007]: time="2024-08-05T21:37:17.637084061Z" level=info msg="RemovePodSandbox for \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\"" Aug 5 21:37:17.639601 containerd[2007]: time="2024-08-05T21:37:17.637191833Z" level=info msg="Forcibly stopping sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\"" Aug 5 21:37:17.685053 sshd[5397]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:17.703028 systemd[1]: sshd@11-172.31.17.56:22-139.178.68.195:36118.service: Deactivated successfully. Aug 5 21:37:17.719127 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 21:37:17.724014 systemd-logind[1994]: Session 12 logged out. Waiting for processes to exit. Aug 5 21:37:17.733747 systemd-logind[1994]: Removed session 12. Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:17.894 [WARNING][5451] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0", GenerateName:"calico-kube-controllers-6d79b48bbd-", Namespace:"calico-system", SelfLink:"", UID:"85d380da-4d3c-44a8-b1e4-555530171664", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d79b48bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"0520eb07236e17d81d87b220f1997d312db51807989956debb4479ecb05bf254", Pod:"calico-kube-controllers-6d79b48bbd-jq6mz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9adb65507e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:17.895 [INFO][5451] k8s.go 608: Cleaning up netns ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:17.895 [INFO][5451] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" iface="eth0" netns="" Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:17.895 [INFO][5451] k8s.go 615: Releasing IP address(es) ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:17.895 [INFO][5451] utils.go 188: Calico CNI releasing IP address ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:18.180 [INFO][5461] ipam_plugin.go 411: Releasing address using handleID ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" HandleID="k8s-pod-network.621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:18.181 [INFO][5461] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:18.182 [INFO][5461] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:18.208 [WARNING][5461] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" HandleID="k8s-pod-network.621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:18.208 [INFO][5461] ipam_plugin.go 439: Releasing address using workloadID ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" HandleID="k8s-pod-network.621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Workload="ip--172--31--17--56-k8s-calico--kube--controllers--6d79b48bbd--jq6mz-eth0" Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:18.215 [INFO][5461] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:18.242074 containerd[2007]: 2024-08-05 21:37:18.230 [INFO][5451] k8s.go 621: Teardown processing complete. ContainerID="621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0" Aug 5 21:37:18.242074 containerd[2007]: time="2024-08-05T21:37:18.240871168Z" level=info msg="TearDown network for sandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\" successfully" Aug 5 21:37:18.267295 containerd[2007]: time="2024-08-05T21:37:18.266581516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:37:18.267295 containerd[2007]: time="2024-08-05T21:37:18.266942428Z" level=info msg="RemovePodSandbox \"621fe893af6af94573ab664c7690c90ef8b5e6f83a78f8c8592b23c67e636ff0\" returns successfully" Aug 5 21:37:18.270103 containerd[2007]: time="2024-08-05T21:37:18.269856004Z" level=info msg="StopPodSandbox for \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\"" Aug 5 21:37:18.279409 containerd[2007]: time="2024-08-05T21:37:18.279297448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Aug 5 21:37:18.282464 containerd[2007]: time="2024-08-05T21:37:18.282312652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:18.290750 containerd[2007]: time="2024-08-05T21:37:18.290037400Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:18.295814 containerd[2007]: time="2024-08-05T21:37:18.295669552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:18.308295 containerd[2007]: time="2024-08-05T21:37:18.308211832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 4.168327197s" Aug 5 21:37:18.309334 containerd[2007]: time="2024-08-05T21:37:18.309121672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns 
image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Aug 5 21:37:18.317840 containerd[2007]: time="2024-08-05T21:37:18.317625844Z" level=info msg="CreateContainer within sandbox \"8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 21:37:18.385243 containerd[2007]: time="2024-08-05T21:37:18.384775217Z" level=info msg="CreateContainer within sandbox \"8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c53781f3ba3a27466995a2cc9dcb4b095db43c12ee8928357f230c89d754a3d8\"" Aug 5 21:37:18.390570 containerd[2007]: time="2024-08-05T21:37:18.388810109Z" level=info msg="StartContainer for \"c53781f3ba3a27466995a2cc9dcb4b095db43c12ee8928357f230c89d754a3d8\"" Aug 5 21:37:18.510992 systemd[1]: run-containerd-runc-k8s.io-c53781f3ba3a27466995a2cc9dcb4b095db43c12ee8928357f230c89d754a3d8-runc.LhftXV.mount: Deactivated successfully. Aug 5 21:37:18.536730 systemd[1]: Started cri-containerd-c53781f3ba3a27466995a2cc9dcb4b095db43c12ee8928357f230c89d754a3d8.scope - libcontainer container c53781f3ba3a27466995a2cc9dcb4b095db43c12ee8928357f230c89d754a3d8. Aug 5 21:37:18.704347 containerd[2007]: time="2024-08-05T21:37:18.704193822Z" level=info msg="StartContainer for \"c53781f3ba3a27466995a2cc9dcb4b095db43c12ee8928357f230c89d754a3d8\" returns successfully" Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.533 [WARNING][5483] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58", Pod:"coredns-5dd5756b68-pkkts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad10c325e39", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.534 [INFO][5483] k8s.go 608: Cleaning up netns ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.534 [INFO][5483] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" iface="eth0" netns="" Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.534 [INFO][5483] k8s.go 615: Releasing IP address(es) ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.534 [INFO][5483] utils.go 188: Calico CNI releasing IP address ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.677 [INFO][5507] ipam_plugin.go 411: Releasing address using handleID ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" HandleID="k8s-pod-network.a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.679 [INFO][5507] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.679 [INFO][5507] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.703 [WARNING][5507] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" HandleID="k8s-pod-network.a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.703 [INFO][5507] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" HandleID="k8s-pod-network.a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.713 [INFO][5507] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:18.726948 containerd[2007]: 2024-08-05 21:37:18.722 [INFO][5483] k8s.go 621: Teardown processing complete. ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:18.730115 containerd[2007]: time="2024-08-05T21:37:18.727512378Z" level=info msg="TearDown network for sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\" successfully" Aug 5 21:37:18.730115 containerd[2007]: time="2024-08-05T21:37:18.727596126Z" level=info msg="StopPodSandbox for \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\" returns successfully" Aug 5 21:37:18.733110 containerd[2007]: time="2024-08-05T21:37:18.732198750Z" level=info msg="RemovePodSandbox for \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\"" Aug 5 21:37:18.733110 containerd[2007]: time="2024-08-05T21:37:18.732314262Z" level=info msg="Forcibly stopping sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\"" Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:18.916 [WARNING][5542] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a55bb9e9-fa9b-4cb6-8143-9c7be22a3c48", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"4aa694d60f12ef4e8336e763aaafec064c4e48907e807615cee45c577b787f58", Pod:"coredns-5dd5756b68-pkkts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad10c325e39", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:18.918 [INFO][5542] k8s.go 608: Cleaning up netns ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:18.918 [INFO][5542] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" iface="eth0" netns="" Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:18.918 [INFO][5542] k8s.go 615: Releasing IP address(es) ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:18.918 [INFO][5542] utils.go 188: Calico CNI releasing IP address ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:18.999 [INFO][5552] ipam_plugin.go 411: Releasing address using handleID ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" HandleID="k8s-pod-network.a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:19.000 [INFO][5552] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:19.000 [INFO][5552] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:19.026 [WARNING][5552] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" HandleID="k8s-pod-network.a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:19.027 [INFO][5552] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" HandleID="k8s-pod-network.a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--pkkts-eth0" Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:19.038 [INFO][5552] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:19.049894 containerd[2007]: 2024-08-05 21:37:19.044 [INFO][5542] k8s.go 621: Teardown processing complete. ContainerID="a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf" Aug 5 21:37:19.052403 containerd[2007]: time="2024-08-05T21:37:19.050635336Z" level=info msg="TearDown network for sandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\" successfully" Aug 5 21:37:19.070398 containerd[2007]: time="2024-08-05T21:37:19.068994088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:37:19.070398 containerd[2007]: time="2024-08-05T21:37:19.069271060Z" level=info msg="RemovePodSandbox \"a48643b2a0c014c3b258f880f3dbe95e0839a996c3a55d8cf15c158f898af3bf\" returns successfully" Aug 5 21:37:19.077744 containerd[2007]: time="2024-08-05T21:37:19.076052644Z" level=info msg="StopPodSandbox for \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\"" Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.206 [WARNING][5571] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"39e5d80d-eef5-4d09-a308-7bc48d19044c", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba", Pod:"coredns-5dd5756b68-mffms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali041c8dbd672", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.206 [INFO][5571] k8s.go 608: Cleaning up netns ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.206 [INFO][5571] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" iface="eth0" netns="" Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.206 [INFO][5571] k8s.go 615: Releasing IP address(es) ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.206 [INFO][5571] utils.go 188: Calico CNI releasing IP address ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.282 [INFO][5578] ipam_plugin.go 411: Releasing address using handleID ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" HandleID="k8s-pod-network.48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.283 [INFO][5578] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.283 [INFO][5578] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.301 [WARNING][5578] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" HandleID="k8s-pod-network.48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.301 [INFO][5578] ipam_plugin.go 439: Releasing address using workloadID ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" HandleID="k8s-pod-network.48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.305 [INFO][5578] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:19.313307 containerd[2007]: 2024-08-05 21:37:19.307 [INFO][5571] k8s.go 621: Teardown processing complete. ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:19.318464 containerd[2007]: time="2024-08-05T21:37:19.315525137Z" level=info msg="TearDown network for sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\" successfully" Aug 5 21:37:19.318464 containerd[2007]: time="2024-08-05T21:37:19.315706997Z" level=info msg="StopPodSandbox for \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\" returns successfully" Aug 5 21:37:19.318464 containerd[2007]: time="2024-08-05T21:37:19.316599101Z" level=info msg="RemovePodSandbox for \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\"" Aug 5 21:37:19.318464 containerd[2007]: time="2024-08-05T21:37:19.316649801Z" level=info msg="Forcibly stopping sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\"" Aug 5 21:37:19.375140 kubelet[3485]: I0805 21:37:19.374959 3485 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 21:37:19.376403 kubelet[3485]: I0805 21:37:19.375992 3485 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.468 [WARNING][5596] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"39e5d80d-eef5-4d09-a308-7bc48d19044c", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"7d7c206eec9df146f9384b70843486e5d3f37fc5ceb1efb36bf8424b9c83cdba", Pod:"coredns-5dd5756b68-mffms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali041c8dbd672", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.470 [INFO][5596] k8s.go 608: Cleaning up netns ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.470 [INFO][5596] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" iface="eth0" netns="" Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.472 [INFO][5596] k8s.go 615: Releasing IP address(es) ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.472 [INFO][5596] utils.go 188: Calico CNI releasing IP address ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.555 [INFO][5603] ipam_plugin.go 411: Releasing address using handleID ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" HandleID="k8s-pod-network.48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.556 [INFO][5603] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.557 [INFO][5603] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.576 [WARNING][5603] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" HandleID="k8s-pod-network.48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.577 [INFO][5603] ipam_plugin.go 439: Releasing address using workloadID ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" HandleID="k8s-pod-network.48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Workload="ip--172--31--17--56-k8s-coredns--5dd5756b68--mffms-eth0" Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.582 [INFO][5603] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:19.593157 containerd[2007]: 2024-08-05 21:37:19.587 [INFO][5596] k8s.go 621: Teardown processing complete. ContainerID="48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88" Aug 5 21:37:19.594182 containerd[2007]: time="2024-08-05T21:37:19.593168083Z" level=info msg="TearDown network for sandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\" successfully" Aug 5 21:37:19.604548 containerd[2007]: time="2024-08-05T21:37:19.604464091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:37:19.604708 containerd[2007]: time="2024-08-05T21:37:19.604590835Z" level=info msg="RemovePodSandbox \"48a433d7ab461efad9d1ebc54716d1cdd69b51dc07f08a6be41463cfea7b0a88\" returns successfully" Aug 5 21:37:19.605616 containerd[2007]: time="2024-08-05T21:37:19.605313835Z" level=info msg="StopPodSandbox for \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\"" Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.709 [WARNING][5621] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fc31815-4413-45c7-b4f1-d969a93d2abe", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58", Pod:"csi-node-driver-6rk7q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6714d5e7b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.710 [INFO][5621] k8s.go 608: Cleaning up netns ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.710 [INFO][5621] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" iface="eth0" netns="" Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.710 [INFO][5621] k8s.go 615: Releasing IP address(es) ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.710 [INFO][5621] utils.go 188: Calico CNI releasing IP address ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.774 [INFO][5627] ipam_plugin.go 411: Releasing address using handleID ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" HandleID="k8s-pod-network.b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.775 [INFO][5627] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.775 [INFO][5627] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.797 [WARNING][5627] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" HandleID="k8s-pod-network.b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.799 [INFO][5627] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" HandleID="k8s-pod-network.b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.808 [INFO][5627] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:19.822067 containerd[2007]: 2024-08-05 21:37:19.816 [INFO][5621] k8s.go 621: Teardown processing complete. ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:19.822067 containerd[2007]: time="2024-08-05T21:37:19.821781824Z" level=info msg="TearDown network for sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\" successfully" Aug 5 21:37:19.822067 containerd[2007]: time="2024-08-05T21:37:19.821824184Z" level=info msg="StopPodSandbox for \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\" returns successfully" Aug 5 21:37:19.824913 containerd[2007]: time="2024-08-05T21:37:19.823786376Z" level=info msg="RemovePodSandbox for \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\"" Aug 5 21:37:19.824913 containerd[2007]: time="2024-08-05T21:37:19.823846904Z" level=info msg="Forcibly stopping sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\"" Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:19.953 [WARNING][5645] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fc31815-4413-45c7-b4f1-d969a93d2abe", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"8dafcbe872d4a6634972deca662bfd009328721c485fd285e8d59bc2b2613f58", Pod:"csi-node-driver-6rk7q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6714d5e7b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:19.954 [INFO][5645] k8s.go 608: Cleaning up netns ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:19.954 [INFO][5645] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" iface="eth0" netns="" Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:19.954 [INFO][5645] k8s.go 615: Releasing IP address(es) ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:19.954 [INFO][5645] utils.go 188: Calico CNI releasing IP address ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:20.036 [INFO][5652] ipam_plugin.go 411: Releasing address using handleID ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" HandleID="k8s-pod-network.b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:20.036 [INFO][5652] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:20.036 [INFO][5652] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:20.062 [WARNING][5652] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" HandleID="k8s-pod-network.b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:20.063 [INFO][5652] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" HandleID="k8s-pod-network.b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Workload="ip--172--31--17--56-k8s-csi--node--driver--6rk7q-eth0" Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:20.073 [INFO][5652] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:20.080713 containerd[2007]: 2024-08-05 21:37:20.077 [INFO][5645] k8s.go 621: Teardown processing complete. ContainerID="b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e" Aug 5 21:37:20.082897 containerd[2007]: time="2024-08-05T21:37:20.081721241Z" level=info msg="TearDown network for sandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\" successfully" Aug 5 21:37:20.090530 containerd[2007]: time="2024-08-05T21:37:20.090430781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:37:20.090865 containerd[2007]: time="2024-08-05T21:37:20.090827345Z" level=info msg="RemovePodSandbox \"b4b92c6bd2c16085a4b18f67aeb4c2826c667cb2bf40f4221388cb3df7c84f8e\" returns successfully" Aug 5 21:37:22.741287 systemd[1]: Started sshd@12-172.31.17.56:22-139.178.68.195:58218.service - OpenSSH per-connection server daemon (139.178.68.195:58218). Aug 5 21:37:22.945468 sshd[5682]: Accepted publickey for core from 139.178.68.195 port 58218 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:22.949046 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:22.961952 systemd-logind[1994]: New session 13 of user core. Aug 5 21:37:22.969223 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 21:37:23.250356 sshd[5682]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:23.259625 systemd-logind[1994]: Session 13 logged out. Waiting for processes to exit. Aug 5 21:37:23.261635 systemd[1]: sshd@12-172.31.17.56:22-139.178.68.195:58218.service: Deactivated successfully. Aug 5 21:37:23.267028 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 21:37:23.273591 systemd-logind[1994]: Removed session 13. Aug 5 21:37:28.296410 systemd[1]: Started sshd@13-172.31.17.56:22-139.178.68.195:58234.service - OpenSSH per-connection server daemon (139.178.68.195:58234). Aug 5 21:37:28.488795 sshd[5706]: Accepted publickey for core from 139.178.68.195 port 58234 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:28.492821 sshd[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:28.507359 systemd-logind[1994]: New session 14 of user core. Aug 5 21:37:28.519040 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 21:37:28.778652 sshd[5706]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:28.786646 systemd[1]: sshd@13-172.31.17.56:22-139.178.68.195:58234.service: Deactivated successfully. 
Aug 5 21:37:28.793179 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 21:37:28.796029 systemd-logind[1994]: Session 14 logged out. Waiting for processes to exit. Aug 5 21:37:28.799342 systemd-logind[1994]: Removed session 14. Aug 5 21:37:30.748448 kubelet[3485]: I0805 21:37:30.748319 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-6rk7q" podStartSLOduration=44.351846176 podCreationTimestamp="2024-08-05 21:36:37 +0000 UTC" firstStartedPulling="2024-08-05 21:37:08.913213222 +0000 UTC m=+52.255856637" lastFinishedPulling="2024-08-05 21:37:18.309622372 +0000 UTC m=+61.652265799" observedRunningTime="2024-08-05 21:37:18.837050383 +0000 UTC m=+62.179693954" watchObservedRunningTime="2024-08-05 21:37:30.748255338 +0000 UTC m=+74.090898777" Aug 5 21:37:33.827241 systemd[1]: Started sshd@14-172.31.17.56:22-139.178.68.195:55906.service - OpenSSH per-connection server daemon (139.178.68.195:55906). Aug 5 21:37:34.018051 sshd[5745]: Accepted publickey for core from 139.178.68.195 port 55906 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:34.021557 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:34.037195 systemd-logind[1994]: New session 15 of user core. Aug 5 21:37:34.047694 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 21:37:34.319567 sshd[5745]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:34.326516 systemd[1]: sshd@14-172.31.17.56:22-139.178.68.195:55906.service: Deactivated successfully. Aug 5 21:37:34.330961 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 21:37:34.336529 systemd-logind[1994]: Session 15 logged out. Waiting for processes to exit. Aug 5 21:37:34.338610 systemd-logind[1994]: Removed session 15. Aug 5 21:37:39.378127 systemd[1]: Started sshd@15-172.31.17.56:22-139.178.68.195:55920.service - OpenSSH per-connection server daemon (139.178.68.195:55920). Aug 5 21:37:39.591286 sshd[5762]: Accepted publickey for core from 139.178.68.195 port 55920 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:39.596461 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:39.617891 systemd-logind[1994]: New session 16 of user core. Aug 5 21:37:39.630206 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 21:37:39.989688 sshd[5762]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:40.006752 systemd[1]: sshd@15-172.31.17.56:22-139.178.68.195:55920.service: Deactivated successfully. Aug 5 21:37:40.017192 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 21:37:40.027093 systemd-logind[1994]: Session 16 logged out. Waiting for processes to exit. Aug 5 21:37:40.059334 systemd[1]: Started sshd@16-172.31.17.56:22-139.178.68.195:55924.service - OpenSSH per-connection server daemon (139.178.68.195:55924). Aug 5 21:37:40.068621 systemd-logind[1994]: Removed session 16. Aug 5 21:37:40.257068 sshd[5775]: Accepted publickey for core from 139.178.68.195 port 55924 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:40.260092 sshd[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:40.276987 systemd-logind[1994]: New session 17 of user core. Aug 5 21:37:40.285884 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 5 21:37:40.890892 sshd[5775]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:40.900655 systemd[1]: sshd@16-172.31.17.56:22-139.178.68.195:55924.service: Deactivated successfully. Aug 5 21:37:40.910909 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 21:37:40.914595 systemd-logind[1994]: Session 17 logged out. Waiting for processes to exit. Aug 5 21:37:40.941730 systemd[1]: Started sshd@17-172.31.17.56:22-139.178.68.195:55608.service - OpenSSH per-connection server daemon (139.178.68.195:55608). Aug 5 21:37:40.945831 systemd-logind[1994]: Removed session 17. Aug 5 21:37:41.155723 sshd[5785]: Accepted publickey for core from 139.178.68.195 port 55608 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:41.161157 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:41.183602 systemd-logind[1994]: New session 18 of user core. Aug 5 21:37:41.189877 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 21:37:43.118753 sshd[5785]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:43.132248 systemd[1]: sshd@17-172.31.17.56:22-139.178.68.195:55608.service: Deactivated successfully. Aug 5 21:37:43.144800 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 21:37:43.150011 systemd[1]: session-18.scope: Consumed 1.206s CPU time. Aug 5 21:37:43.178678 systemd-logind[1994]: Session 18 logged out. Waiting for processes to exit. Aug 5 21:37:43.189427 systemd[1]: Started sshd@18-172.31.17.56:22-139.178.68.195:55614.service - OpenSSH per-connection server daemon (139.178.68.195:55614). Aug 5 21:37:43.199705 systemd-logind[1994]: Removed session 18. Aug 5 21:37:43.403815 sshd[5805]: Accepted publickey for core from 139.178.68.195 port 55614 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:43.410196 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:43.425681 systemd-logind[1994]: New session 19 of user core. Aug 5 21:37:43.435155 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 21:37:44.387028 sshd[5805]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:44.403192 systemd[1]: sshd@18-172.31.17.56:22-139.178.68.195:55614.service: Deactivated successfully. Aug 5 21:37:44.411288 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 21:37:44.415579 systemd-logind[1994]: Session 19 logged out. Waiting for processes to exit. Aug 5 21:37:44.438090 systemd[1]: Started sshd@19-172.31.17.56:22-139.178.68.195:55630.service - OpenSSH per-connection server daemon (139.178.68.195:55630). Aug 5 21:37:44.441456 systemd-logind[1994]: Removed session 19. Aug 5 21:37:44.642860 sshd[5818]: Accepted publickey for core from 139.178.68.195 port 55630 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:44.648662 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:44.667138 systemd-logind[1994]: New session 20 of user core. Aug 5 21:37:44.676045 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 21:37:44.987062 sshd[5818]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:44.998863 systemd[1]: sshd@19-172.31.17.56:22-139.178.68.195:55630.service: Deactivated successfully. Aug 5 21:37:45.006635 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 21:37:45.010123 systemd-logind[1994]: Session 20 logged out. Waiting for processes to exit. 
Aug 5 21:37:45.013135 systemd-logind[1994]: Removed session 20. Aug 5 21:37:50.038257 systemd[1]: Started sshd@20-172.31.17.56:22-139.178.68.195:55640.service - OpenSSH per-connection server daemon (139.178.68.195:55640). Aug 5 21:37:50.280084 sshd[5844]: Accepted publickey for core from 139.178.68.195 port 55640 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:50.283815 sshd[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:50.307936 systemd-logind[1994]: New session 21 of user core. Aug 5 21:37:50.315767 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 21:37:50.396342 systemd[1]: run-containerd-runc-k8s.io-803410d1a08c9dfe3855bf34aae6b0bc46276579b7b5643a60ed4ceda667965c-runc.masb6b.mount: Deactivated successfully. Aug 5 21:37:50.718877 sshd[5844]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:50.727351 systemd[1]: sshd@20-172.31.17.56:22-139.178.68.195:55640.service: Deactivated successfully. Aug 5 21:37:50.735735 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 21:37:50.741608 systemd-logind[1994]: Session 21 logged out. Waiting for processes to exit. Aug 5 21:37:50.746738 systemd-logind[1994]: Removed session 21. Aug 5 21:37:51.484218 kubelet[3485]: I0805 21:37:51.484145 3485 topology_manager.go:215] "Topology Admit Handler" podUID="721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc" podNamespace="calico-apiserver" podName="calico-apiserver-684686b6d9-vhpl7" Aug 5 21:37:51.511453 systemd[1]: Created slice kubepods-besteffort-pod721490d4_f1e7_46dd_b8ec_aa9f2f07d2bc.slice - libcontainer container kubepods-besteffort-pod721490d4_f1e7_46dd_b8ec_aa9f2f07d2bc.slice. Aug 5 21:37:51.549043 kubelet[3485]: I0805 21:37:51.548053 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx5zt\" (UniqueName: \"kubernetes.io/projected/721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc-kube-api-access-rx5zt\") pod \"calico-apiserver-684686b6d9-vhpl7\" (UID: \"721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc\") " pod="calico-apiserver/calico-apiserver-684686b6d9-vhpl7" Aug 5 21:37:51.549043 kubelet[3485]: I0805 21:37:51.548147 3485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc-calico-apiserver-certs\") pod \"calico-apiserver-684686b6d9-vhpl7\" (UID: \"721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc\") " pod="calico-apiserver/calico-apiserver-684686b6d9-vhpl7" Aug 5 21:37:51.651288 kubelet[3485]: E0805 21:37:51.651244 3485 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 21:37:51.652475 kubelet[3485]: E0805 21:37:51.651607 3485 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc-calico-apiserver-certs podName:721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc nodeName:}" failed. No retries permitted until 2024-08-05 21:37:52.151570254 +0000 UTC m=+95.494213693 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc-calico-apiserver-certs") pod "calico-apiserver-684686b6d9-vhpl7" (UID: "721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc") : secret "calico-apiserver-certs" not found Aug 5 21:37:52.421916 containerd[2007]: time="2024-08-05T21:37:52.421787402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684686b6d9-vhpl7,Uid:721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc,Namespace:calico-apiserver,Attempt:0,}" Aug 5 21:37:52.857691 systemd-networkd[1845]: cali1f6ccdbd1de: Link UP Aug 5 21:37:52.862843 systemd-networkd[1845]: cali1f6ccdbd1de: Gained carrier Aug 5 21:37:52.872932 (udev-worker)[5903]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.579 [INFO][5882] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0 calico-apiserver-684686b6d9- calico-apiserver 721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc 1092 0 2024-08-05 21:37:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:684686b6d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-56 calico-apiserver-684686b6d9-vhpl7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1f6ccdbd1de [] []}} ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Namespace="calico-apiserver" Pod="calico-apiserver-684686b6d9-vhpl7" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.581 [INFO][5882] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Namespace="calico-apiserver" Pod="calico-apiserver-684686b6d9-vhpl7" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.686 [INFO][5893] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" HandleID="k8s-pod-network.c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Workload="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.734 [INFO][5893] ipam_plugin.go 264: Auto assigning IP ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" HandleID="k8s-pod-network.c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Workload="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b40b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-56", "pod":"calico-apiserver-684686b6d9-vhpl7", "timestamp":"2024-08-05 21:37:52.686484879 +0000 UTC"}, Hostname:"ip-172-31-17-56", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.734 [INFO][5893] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.734 [INFO][5893] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.735 [INFO][5893] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-56' Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.749 [INFO][5893] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" host="ip-172-31-17-56" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.757 [INFO][5893] ipam.go 372: Looking up existing affinities for host host="ip-172-31-17-56" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.769 [INFO][5893] ipam.go 489: Trying affinity for 192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.787 [INFO][5893] ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.798 [INFO][5893] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ip-172-31-17-56" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.798 [INFO][5893] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" host="ip-172-31-17-56" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.804 [INFO][5893] ipam.go 1685: Creating new handle: k8s-pod-network.c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73 Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.815 [INFO][5893] ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" host="ip-172-31-17-56" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.831 [INFO][5893] ipam.go 1216: Successfully claimed IPs: [192.168.115.133/26] block=192.168.115.128/26 handle="k8s-pod-network.c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" host="ip-172-31-17-56" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.831 [INFO][5893] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.133/26] handle="k8s-pod-network.c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" host="ip-172-31-17-56" Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.831 [INFO][5893] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:37:52.907161 containerd[2007]: 2024-08-05 21:37:52.832 [INFO][5893] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.115.133/26] IPv6=[] ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" HandleID="k8s-pod-network.c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Workload="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0" Aug 5 21:37:52.911817 containerd[2007]: 2024-08-05 21:37:52.836 [INFO][5882] k8s.go 386: Populated endpoint ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Namespace="calico-apiserver" Pod="calico-apiserver-684686b6d9-vhpl7" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0", GenerateName:"calico-apiserver-684686b6d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684686b6d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"", Pod:"calico-apiserver-684686b6d9-vhpl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f6ccdbd1de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:52.911817 containerd[2007]: 2024-08-05 21:37:52.837 [INFO][5882] k8s.go 387: Calico CNI using IPs: [192.168.115.133/32] ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Namespace="calico-apiserver" Pod="calico-apiserver-684686b6d9-vhpl7" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0" Aug 5 21:37:52.911817 containerd[2007]: 2024-08-05 21:37:52.837 [INFO][5882] dataplane_linux.go 68: Setting the host side veth name to cali1f6ccdbd1de ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Namespace="calico-apiserver" Pod="calico-apiserver-684686b6d9-vhpl7" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0" Aug 5 21:37:52.911817 containerd[2007]: 2024-08-05 21:37:52.850 [INFO][5882] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Namespace="calico-apiserver" Pod="calico-apiserver-684686b6d9-vhpl7" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0" Aug 5 21:37:52.911817 containerd[2007]: 2024-08-05 21:37:52.854 [INFO][5882] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Namespace="calico-apiserver" Pod="calico-apiserver-684686b6d9-vhpl7" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0", GenerateName:"calico-apiserver-684686b6d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684686b6d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-56", ContainerID:"c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73", Pod:"calico-apiserver-684686b6d9-vhpl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f6ccdbd1de", MAC:"62:ee:a9:99:8a:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:52.911817 containerd[2007]: 2024-08-05 21:37:52.895 [INFO][5882] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73" Namespace="calico-apiserver" Pod="calico-apiserver-684686b6d9-vhpl7" WorkloadEndpoint="ip--172--31--17--56-k8s-calico--apiserver--684686b6d9--vhpl7-eth0" Aug 5 21:37:53.011304 containerd[2007]: time="2024-08-05T21:37:53.008607409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:53.011304 containerd[2007]: time="2024-08-05T21:37:53.008733265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:53.011304 containerd[2007]: time="2024-08-05T21:37:53.008783065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:53.011304 containerd[2007]: time="2024-08-05T21:37:53.008812273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:53.113680 systemd[1]: Started cri-containerd-c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73.scope - libcontainer container c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73. 
Aug 5 21:37:53.300590 containerd[2007]: time="2024-08-05T21:37:53.300413402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684686b6d9-vhpl7,Uid:721490d4-f1e7-46dd-b8ec-aa9f2f07d2bc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73\"" Aug 5 21:37:53.309055 containerd[2007]: time="2024-08-05T21:37:53.307736738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 21:37:54.120791 systemd-networkd[1845]: cali1f6ccdbd1de: Gained IPv6LL Aug 5 21:37:55.784061 systemd[1]: Started sshd@21-172.31.17.56:22-139.178.68.195:43242.service - OpenSSH per-connection server daemon (139.178.68.195:43242). Aug 5 21:37:56.006811 sshd[5963]: Accepted publickey for core from 139.178.68.195 port 43242 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:56.019586 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:56.065888 systemd-logind[1994]: New session 22 of user core. Aug 5 21:37:56.074716 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 21:37:56.258328 ntpd[1987]: Listen normally on 13 cali1f6ccdbd1de [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 21:37:56.263332 ntpd[1987]: 5 Aug 21:37:56 ntpd[1987]: Listen normally on 13 cali1f6ccdbd1de [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 21:37:56.488585 sshd[5963]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:56.512000 systemd[1]: sshd@21-172.31.17.56:22-139.178.68.195:43242.service: Deactivated successfully. Aug 5 21:37:56.533855 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 21:37:56.547674 systemd-logind[1994]: Session 22 logged out. Waiting for processes to exit. Aug 5 21:37:56.558992 systemd-logind[1994]: Removed session 22. 
Aug 5 21:37:56.753105 containerd[2007]: time="2024-08-05T21:37:56.750585967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:56.754073 containerd[2007]: time="2024-08-05T21:37:56.753964879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Aug 5 21:37:56.757543 containerd[2007]: time="2024-08-05T21:37:56.757421779Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:56.770655 containerd[2007]: time="2024-08-05T21:37:56.770557639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:56.775459 containerd[2007]: time="2024-08-05T21:37:56.774760279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 3.466935497s" Aug 5 21:37:56.775853 containerd[2007]: time="2024-08-05T21:37:56.775694359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Aug 5 21:37:56.783039 containerd[2007]: time="2024-08-05T21:37:56.782937283Z" level=info msg="CreateContainer within sandbox \"c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 21:37:56.832058 containerd[2007]: time="2024-08-05T21:37:56.831993272Z" level=info msg="CreateContainer within sandbox \"c9fb52086d814d24650f1047e332caa539a4eb4c8e691680a175f888bc675f73\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4354561e279208d787c0f2d7db7d8e955ee12c8e03d5e8863502d0a6e823da9a\"" Aug 5 21:37:56.836112 containerd[2007]: time="2024-08-05T21:37:56.835974644Z" level=info msg="StartContainer for \"4354561e279208d787c0f2d7db7d8e955ee12c8e03d5e8863502d0a6e823da9a\"" Aug 5 21:37:56.989933 systemd[1]: Started cri-containerd-4354561e279208d787c0f2d7db7d8e955ee12c8e03d5e8863502d0a6e823da9a.scope - libcontainer container 4354561e279208d787c0f2d7db7d8e955ee12c8e03d5e8863502d0a6e823da9a. 
Aug 5 21:37:57.134719 containerd[2007]: time="2024-08-05T21:37:57.134663765Z" level=info msg="StartContainer for \"4354561e279208d787c0f2d7db7d8e955ee12c8e03d5e8863502d0a6e823da9a\" returns successfully" Aug 5 21:37:58.061756 kubelet[3485]: I0805 21:37:58.061645 3485 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-684686b6d9-vhpl7" podStartSLOduration=3.591548225 podCreationTimestamp="2024-08-05 21:37:51 +0000 UTC" firstStartedPulling="2024-08-05 21:37:53.306643874 +0000 UTC m=+96.649287301" lastFinishedPulling="2024-08-05 21:37:56.776622223 +0000 UTC m=+100.119265650" observedRunningTime="2024-08-05 21:37:58.059560434 +0000 UTC m=+101.402203969" watchObservedRunningTime="2024-08-05 21:37:58.061526574 +0000 UTC m=+101.404170013" Aug 5 21:38:01.527319 systemd[1]: Started sshd@22-172.31.17.56:22-139.178.68.195:43694.service - OpenSSH per-connection server daemon (139.178.68.195:43694). Aug 5 21:38:01.720961 sshd[6080]: Accepted publickey for core from 139.178.68.195 port 43694 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:01.725845 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:01.738117 systemd-logind[1994]: New session 23 of user core. Aug 5 21:38:01.747293 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 21:38:02.040086 sshd[6080]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:02.049066 systemd-logind[1994]: Session 23 logged out. Waiting for processes to exit. Aug 5 21:38:02.050236 systemd[1]: sshd@22-172.31.17.56:22-139.178.68.195:43694.service: Deactivated successfully. Aug 5 21:38:02.060099 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 21:38:02.066969 systemd-logind[1994]: Removed session 23. Aug 5 21:38:07.088339 systemd[1]: Started sshd@23-172.31.17.56:22-139.178.68.195:43698.service - OpenSSH per-connection server daemon (139.178.68.195:43698). Aug 5 21:38:07.288413 sshd[6095]: Accepted publickey for core from 139.178.68.195 port 43698 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:07.293440 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:07.308947 systemd-logind[1994]: New session 24 of user core. Aug 5 21:38:07.317015 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 5 21:38:07.578613 sshd[6095]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:07.589563 systemd[1]: sshd@23-172.31.17.56:22-139.178.68.195:43698.service: Deactivated successfully. Aug 5 21:38:07.596480 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 21:38:07.599336 systemd-logind[1994]: Session 24 logged out. Waiting for processes to exit. Aug 5 21:38:07.603461 systemd-logind[1994]: Removed session 24. Aug 5 21:38:12.624223 systemd[1]: Started sshd@24-172.31.17.56:22-139.178.68.195:58030.service - OpenSSH per-connection server daemon (139.178.68.195:58030). Aug 5 21:38:12.834175 sshd[6115]: Accepted publickey for core from 139.178.68.195 port 58030 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:12.841520 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:12.854530 systemd-logind[1994]: New session 25 of user core. Aug 5 21:38:12.864642 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 5 21:38:13.223013 sshd[6115]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:13.235461 systemd[1]: sshd@24-172.31.17.56:22-139.178.68.195:58030.service: Deactivated successfully. Aug 5 21:38:13.246869 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 21:38:13.255517 systemd-logind[1994]: Session 25 logged out. Waiting for processes to exit. Aug 5 21:38:13.260165 systemd-logind[1994]: Removed session 25. Aug 5 21:38:18.264944 systemd[1]: Started sshd@25-172.31.17.56:22-139.178.68.195:58040.service - OpenSSH per-connection server daemon (139.178.68.195:58040). Aug 5 21:38:18.447784 sshd[6137]: Accepted publickey for core from 139.178.68.195 port 58040 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:18.452558 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:18.466425 systemd-logind[1994]: New session 26 of user core. Aug 5 21:38:18.477300 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 5 21:38:18.787756 sshd[6137]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:18.795849 systemd[1]: sshd@25-172.31.17.56:22-139.178.68.195:58040.service: Deactivated successfully. Aug 5 21:38:18.801598 systemd[1]: session-26.scope: Deactivated successfully. Aug 5 21:38:18.809624 systemd-logind[1994]: Session 26 logged out. Waiting for processes to exit. Aug 5 21:38:18.812489 systemd-logind[1994]: Removed session 26. Aug 5 21:38:32.522479 systemd[1]: cri-containerd-4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434.scope: Deactivated successfully. Aug 5 21:38:32.524535 systemd[1]: cri-containerd-4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434.scope: Consumed 6.013s CPU time, 22.1M memory peak, 0B memory swap peak. Aug 5 21:38:32.585796 containerd[2007]: time="2024-08-05T21:38:32.585657413Z" level=info msg="shim disconnected" id=4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434 namespace=k8s.io Aug 5 21:38:32.585796 containerd[2007]: time="2024-08-05T21:38:32.585782561Z" level=warning msg="cleaning up after shim disconnected" id=4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434 namespace=k8s.io Aug 5 21:38:32.589356 containerd[2007]: time="2024-08-05T21:38:32.585806585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:38:32.593656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434-rootfs.mount: Deactivated successfully. Aug 5 21:38:32.627593 systemd[1]: cri-containerd-85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac.scope: Deactivated successfully. Aug 5 21:38:32.630996 systemd[1]: cri-containerd-85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac.scope: Consumed 12.957s CPU time. 
Aug 5 21:38:32.686925 containerd[2007]: time="2024-08-05T21:38:32.686694438Z" level=info msg="shim disconnected" id=85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac namespace=k8s.io Aug 5 21:38:32.688837 containerd[2007]: time="2024-08-05T21:38:32.687193422Z" level=warning msg="cleaning up after shim disconnected" id=85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac namespace=k8s.io Aug 5 21:38:32.688837 containerd[2007]: time="2024-08-05T21:38:32.688543278Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:38:32.689289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac-rootfs.mount: Deactivated successfully. Aug 5 21:38:33.201167 kubelet[3485]: I0805 21:38:33.199861 3485 scope.go:117] "RemoveContainer" containerID="4994fb92994ee490b3842c252570a30b280d0f5ada553ebd743845a5735d3434" Aug 5 21:38:33.209252 kubelet[3485]: I0805 21:38:33.208779 3485 scope.go:117] "RemoveContainer" containerID="85aae7775eaaaf27f071b61e93ded43f2f934d7bcb9abe8ddc70f2951bfc11ac" Aug 5 21:38:33.212052 containerd[2007]: time="2024-08-05T21:38:33.211681348Z" level=info msg="CreateContainer within sandbox \"ec5dc293df62715c0e55a60395495f2d365cc09b4c8947ba8794846204704410\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Aug 5 21:38:33.219557 containerd[2007]: time="2024-08-05T21:38:33.218728276Z" level=info msg="CreateContainer within sandbox \"52f02618b273a8b4a159d7dbbcd03f25d74d44a1af11abcbe9711dcff2ef322e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Aug 5 21:38:33.266719 containerd[2007]: time="2024-08-05T21:38:33.266308145Z" level=info msg="CreateContainer within sandbox \"ec5dc293df62715c0e55a60395495f2d365cc09b4c8947ba8794846204704410\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4000fda1eb8b3537c1e7828b42665bb0eb8a3f64fdd61ad60d4774457e2811e7\"" Aug 5 21:38:33.268781 containerd[2007]: time="2024-08-05T21:38:33.268701737Z" level=info msg="StartContainer for \"4000fda1eb8b3537c1e7828b42665bb0eb8a3f64fdd61ad60d4774457e2811e7\"" Aug 5 21:38:33.274187 containerd[2007]: time="2024-08-05T21:38:33.272839337Z" level=info msg="CreateContainer within sandbox \"52f02618b273a8b4a159d7dbbcd03f25d74d44a1af11abcbe9711dcff2ef322e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e5e8a142a71209c97a3df3450fd7e4e002c6508a412ff57bb7223919f1367de1\"" Aug 5 21:38:33.279402 containerd[2007]: time="2024-08-05T21:38:33.277023929Z" level=info msg="StartContainer for \"e5e8a142a71209c97a3df3450fd7e4e002c6508a412ff57bb7223919f1367de1\"" Aug 5 21:38:33.362543 systemd[1]: Started cri-containerd-4000fda1eb8b3537c1e7828b42665bb0eb8a3f64fdd61ad60d4774457e2811e7.scope - libcontainer container 4000fda1eb8b3537c1e7828b42665bb0eb8a3f64fdd61ad60d4774457e2811e7. Aug 5 21:38:33.402777 systemd[1]: Started cri-containerd-e5e8a142a71209c97a3df3450fd7e4e002c6508a412ff57bb7223919f1367de1.scope - libcontainer container e5e8a142a71209c97a3df3450fd7e4e002c6508a412ff57bb7223919f1367de1. 
Aug 5 21:38:33.525460 containerd[2007]: time="2024-08-05T21:38:33.522768174Z" level=info msg="StartContainer for \"e5e8a142a71209c97a3df3450fd7e4e002c6508a412ff57bb7223919f1367de1\" returns successfully" Aug 5 21:38:33.525460 containerd[2007]: time="2024-08-05T21:38:33.522768318Z" level=info msg="StartContainer for \"4000fda1eb8b3537c1e7828b42665bb0eb8a3f64fdd61ad60d4774457e2811e7\" returns successfully" Aug 5 21:38:38.806245 systemd[1]: cri-containerd-b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66.scope: Deactivated successfully. Aug 5 21:38:38.807107 systemd[1]: cri-containerd-b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66.scope: Consumed 3.387s CPU time, 16.2M memory peak, 0B memory swap peak. Aug 5 21:38:38.857835 kubelet[3485]: E0805 21:38:38.856039 3485 controller.go:193] "Failed to update lease" err="Put \"https://172.31.17.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-56?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Aug 5 21:38:38.886297 containerd[2007]: time="2024-08-05T21:38:38.884441256Z" level=info msg="shim disconnected" id=b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66 namespace=k8s.io Aug 5 21:38:38.886297 containerd[2007]: time="2024-08-05T21:38:38.884602584Z" level=warning msg="cleaning up after shim disconnected" id=b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66 namespace=k8s.io Aug 5 21:38:38.886297 containerd[2007]: time="2024-08-05T21:38:38.884691948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:38:38.885946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66-rootfs.mount: Deactivated successfully. Aug 5 21:38:39.247722 kubelet[3485]: I0805 21:38:39.246640 3485 scope.go:117] "RemoveContainer" containerID="b63ba31cf776c30a6e93b13f6e2c405e08bc4a6fcf83e5d89931904c35ed8b66" Aug 5 21:38:39.253040 containerd[2007]: time="2024-08-05T21:38:39.252985018Z" level=info msg="CreateContainer within sandbox \"ffcab1c8c194914d0743aa78f285723e19d380c02323bc4425aaa9a23be6c8b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Aug 5 21:38:39.289860 containerd[2007]: time="2024-08-05T21:38:39.289775050Z" level=info msg="CreateContainer within sandbox \"ffcab1c8c194914d0743aa78f285723e19d380c02323bc4425aaa9a23be6c8b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f20befadec25dadf9533643a897a4a8e4fb4ab13dbb6e8b6baacd0caa03ddadf\"" Aug 5 21:38:39.293449 containerd[2007]: time="2024-08-05T21:38:39.291958294Z" level=info msg="StartContainer for \"f20befadec25dadf9533643a897a4a8e4fb4ab13dbb6e8b6baacd0caa03ddadf\"" Aug 5 21:38:39.372019 systemd[1]: Started cri-containerd-f20befadec25dadf9533643a897a4a8e4fb4ab13dbb6e8b6baacd0caa03ddadf.scope - libcontainer container f20befadec25dadf9533643a897a4a8e4fb4ab13dbb6e8b6baacd0caa03ddadf. Aug 5 21:38:39.455842 containerd[2007]: time="2024-08-05T21:38:39.455777543Z" level=info msg="StartContainer for \"f20befadec25dadf9533643a897a4a8e4fb4ab13dbb6e8b6baacd0caa03ddadf\" returns successfully" Aug 5 21:38:39.883622 systemd[1]: run-containerd-runc-k8s.io-f20befadec25dadf9533643a897a4a8e4fb4ab13dbb6e8b6baacd0caa03ddadf-runc.n0vuzp.mount: Deactivated successfully. 
Aug 5 21:38:48.857233 kubelet[3485]: E0805 21:38:48.856767 3485 controller.go:193] "Failed to update lease" err="Put \"https://172.31.17.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-56?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"